Constrained Optimal Fuel Consumption of HEV: A Constrained Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2403.07503v2
- Date: Tue, 2 Apr 2024 11:20:22 GMT
- Title: Constrained Optimal Fuel Consumption of HEV: A Constrained Reinforcement Learning Approach
- Authors: Shuchang Yan
- Abstract summary: This work provides the mathematical expression of constrained optimal fuel consumption (COFC) from the perspective of constrained reinforcement learning (CRL)
Two mainstream approaches of CRL, constrained variational policy optimization (CVPO) and Lagrangian-based approaches, are utilized for the first time to obtain the vehicle's minimum fuel consumption.
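The COFC problem summarized above can be sketched as a constrained Markov decision process. The symbols below (fuel rate \(\dot{m}_f\), horizon \(T\), tolerance \(\epsilon\)) are illustrative placeholders, not notation taken from the paper:

```latex
\min_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{T} \dot{m}_{f}(s_t, a_t)\,\Delta t\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\!\left[\,\lvert \mathrm{SOC}_T - \mathrm{SOC}_0 \rvert\,\right] \le \epsilon
```

The objective is cumulative fuel mass over the drive cycle, and the constraint enforces battery electrical (SOC) balance between the start and end of the cycle.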
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid electric vehicles (HEVs) are becoming increasingly popular because they can better combine the working characteristics of internal combustion engines and electric motors. However, the minimum fuel consumption of an HEV for a battery electrical balance case under a specific assembly condition and a specific speed curve still needs to be clarified in academia and industry. Regarding this problem, this work provides the mathematical expression of constrained optimal fuel consumption (COFC) from the perspective of constrained reinforcement learning (CRL) for the first time globally. Also, two mainstream approaches of CRL, constrained variational policy optimization (CVPO) and Lagrangian-based approaches, are utilized for the first time to obtain the vehicle's minimum fuel consumption under the battery electrical balance condition. We conduct case studies on the well-known TOYOTA Prius hybrid system (THS) under the NEDC condition; we present the key steps to implement the CRL approaches and compare the performance of the CVPO and Lagrangian-based approaches. Our case study shows that both CVPO and Lagrangian-based approaches can obtain the lowest fuel consumption while satisfying the SOC balance constraint. The CVPO approach converges stably, while the Lagrangian-based approach reaches the lowest fuel consumption of 3.95 L/100km, though with larger oscillations. This result verifies the effectiveness of our proposed CRL approaches to the COFC problem.
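The Lagrangian-based approach from the abstract can be illustrated on a toy scalar problem. This is a minimal sketch of primal-dual updates, assuming made-up quadratic stand-ins for the fuel cost and the SOC-balance constraint; it is not the paper's Prius THS model or its actual training loop:

```python
# Toy sketch of the Lagrangian-based CRL idea: minimize a "fuel" objective
# subject to a "SOC balance" constraint via primal-dual gradient updates.
# fuel(u) and g(u) are hypothetical stand-ins chosen for illustration only.

def fuel(u):
    # hypothetical fuel cost, unconstrained minimum at u = 2
    return (u - 2.0) ** 2

def g(u):
    # hypothetical SOC-balance constraint g(u) <= 0, i.e. |u| <= 1
    return u ** 2 - 1.0

def solve(steps=5000, lr_u=0.05, lr_lam=0.05):
    u, lam = 0.0, 0.0
    for _ in range(steps):
        # primal descent on the Lagrangian L(u, lam) = fuel(u) + lam * g(u)
        grad_u = 2.0 * (u - 2.0) + lam * 2.0 * u
        u -= lr_u * grad_u
        # dual ascent on the multiplier, projected to stay non-negative
        lam = max(0.0, lam + lr_lam * g(u))
    return u, lam

u_star, lam_star = solve()
print(round(u_star, 3), round(lam_star, 3))  # converges near u = 1, lam = 1
```

The constrained optimum sits on the constraint boundary (u = 1), which the multiplier enforces; the oscillations the abstract reports for the Lagrangian approach correspond to the spiral of these coupled primal-dual dynamics before they settle.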
Related papers
- Constrained Optimal Fuel Consumption of HEV: Considering the Observational Perturbation [12.936592572736908]
We aim to minimize fuel consumption while maintaining SOC balance under observational perturbations in SOC and speed.
This work is the first worldwide to use seven training approaches to solve the COFC problem under five types of perturbations.
arXiv Detail & Related papers (2024-10-28T10:45:42Z) - EcoFollower: An Environment-Friendly Car Following Model Considering Fuel Consumption [9.42048156323799]
This study introduces EcoFollower, a novel eco-car-following model developed using reinforcement learning (RL) to optimize fuel consumption in car-following scenarios.
The model achieved a significant reduction in fuel consumption, lowering it by 10.42% compared to actual driving scenarios.
arXiv Detail & Related papers (2024-07-22T16:48:37Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - CCE: Sample Efficient Sparse Reward Policy Learning for Robotic Navigation via Confidence-Controlled Exploration [72.24964965882783]
Confidence-Controlled Exploration (CCE) is designed to enhance the training sample efficiency of reinforcement learning algorithms for sparse reward settings such as robot navigation.
CCE is based on a novel relationship we provide between gradient estimation and policy entropy.
We demonstrate through simulated and real-world experiments that CCE outperforms conventional methods that employ constant trajectory lengths and entropy regularization.
arXiv Detail & Related papers (2023-06-09T18:45:15Z) - Driver Assistance Eco-driving and Transmission Control with Deep Reinforcement Learning [2.064612766965483]
In this paper, a model-free deep reinforcement learning (RL) control agent is proposed for active Eco-driving assistance.
It trades off fuel consumption against other driver-accommodation objectives, and learns optimal traction torque and transmission shifting policies from experience.
It shows superior performance in minimizing fuel consumption compared to a baseline controller that has full knowledge of fuel-efficiency tables.
arXiv Detail & Related papers (2022-12-15T02:52:07Z) - Data-Driven Chance Constrained AC-OPF using Hybrid Sparse Gaussian Processes [57.70237375696411]
The paper proposes a fast data-driven setup that uses the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
We demonstrate the efficiency of the proposed approach in a numerical study over multiple IEEE test cases, showing up to two times faster and more accurate solutions.
arXiv Detail & Related papers (2022-08-30T09:27:59Z) - Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate uncertainty inputs.
The GP approach learns a simple data-driven model that closes this gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z) - A new Hyper-heuristic based on Adaptive Simulated Annealing and Reinforcement Learning for the Capacitated Electric Vehicle Routing Problem [9.655068751758952]
Electric vehicles (EVs) have been adopted in urban areas to reduce environmental pollution and global warming.
There are still deficiencies in routing the trajectories of last-mile logistics that continue to impact social and economic sustainability.
This paper proposes a hyper-heuristic approach called Hyper-heuristic Adaptive Simulated Annealing with Reinforcement Learning.
arXiv Detail & Related papers (2022-06-07T11:10:38Z) - An Energy Consumption Model for Electrical Vehicle Networks via Extended Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z) - Safe Model-based Off-policy Reinforcement Learning for Eco-Driving in Connected and Automated Hybrid Electric Vehicles [3.5259944260228977]
This work proposes a Safe Off-policy Model-Based Reinforcement Learning algorithm for the eco-driving problem.
The proposed algorithm leads to a policy with a higher average speed and a better fuel economy compared to the model-free agent.
arXiv Detail & Related papers (2021-05-25T03:41:29Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.