Hybrid Action Based Reinforcement Learning for Multi-Objective Compatible Autonomous Driving
- URL: http://arxiv.org/abs/2501.08096v1
- Date: Tue, 14 Jan 2025 13:10:13 GMT
- Title: Hybrid Action Based Reinforcement Learning for Multi-Objective Compatible Autonomous Driving
- Authors: Guizhe Jin, Zhuoren Li, Bo Leng, Wei Han, Lu Xiong, Chen Sun
- Abstract summary: Reinforcement Learning (RL) has shown excellent performance in solving decision-making and control problems of autonomous driving.
Driving is a multi-attribute problem, leading to challenges in achieving multi-objective compatibility for current RL methods.
We propose a Multi-objective Ensemble-Critic reinforcement learning method with Hybrid Parametrized Action for multi-objective compatible autonomous driving.
- Score: 9.39122455540358
- License:
- Abstract: Reinforcement Learning (RL) has shown excellent performance in solving decision-making and control problems of autonomous driving and is increasingly applied in diverse driving scenarios. However, driving is a multi-attribute problem, which makes multi-objective compatibility challenging for current RL methods, in both policy execution and policy iteration. On the one hand, the common action space structure with a single action type limits driving flexibility or results in large behavior fluctuations during policy execution. On the other hand, a multi-attribute weighted single reward function causes the agent to pay disproportionate attention to certain objectives during policy iteration. To this end, we propose a Multi-objective Ensemble-Critic reinforcement learning method with Hybrid Parametrized Action for multi-objective compatible autonomous driving. Specifically, a parameterized action space is constructed to generate hybrid driving actions that combine abstract guidance with concrete control commands. A multi-objective critics architecture is built around multiple attribute rewards, ensuring that different driving objectives receive attention simultaneously. Additionally, an uncertainty-based exploration strategy is introduced to help the agent approach a viable driving policy faster. Experimental results in both a simulated traffic environment and the HighD dataset demonstrate that our method achieves multi-objective compatible autonomous driving in terms of driving efficiency, action consistency, and safety. It enhances overall driving performance while significantly improving training efficiency.
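To make the hybrid parameterized action space and the multi-objective critics more concrete, the sketch below shows one way such components could be wired up in PyTorch. All module names, layer sizes, and the choice of three behaviours and three objectives are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HybridActionPolicy(nn.Module):
    """Hybrid parameterized action head: a discrete high-level behaviour
    (abstract guidance) plus continuous parameters for that behaviour
    (concrete control commands)."""

    def __init__(self, obs_dim: int, n_behaviours: int = 3, param_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.behaviour_logits = nn.Linear(128, n_behaviours)            # e.g. keep lane / change left / change right
        self.control_params = nn.Linear(128, n_behaviours * param_dim)  # e.g. target speed and steering per behaviour
        self.n_behaviours, self.param_dim = n_behaviours, param_dim

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        behaviour = torch.distributions.Categorical(logits=self.behaviour_logits(h)).sample()
        params = torch.tanh(self.control_params(h)).view(-1, self.n_behaviours, self.param_dim)
        chosen = params[torch.arange(obs.shape[0]), behaviour]          # parameters of the sampled behaviour
        return behaviour, chosen


class MultiObjectiveCritics(nn.Module):
    """One Q-value head per driving objective (e.g. safety, efficiency, comfort),
    so each objective is evaluated separately rather than folded into one
    weighted scalar return."""

    def __init__(self, obs_dim: int, act_dim: int, n_objectives: int = 3):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, 1))
            for _ in range(n_objectives)
        ])

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)
        return torch.cat([head(x) for head in self.heads], dim=-1)      # shape: (batch, n_objectives)


# Tiny usage example with random observations.
policy = HybridActionPolicy(obs_dim=32)
critics = MultiObjectiveCritics(obs_dim=32, act_dim=3)   # action = behaviour id + 2 parameters
obs = torch.randn(4, 32)
behaviour, params = policy(obs)
action = torch.cat([behaviour.float().unsqueeze(-1), params], dim=-1)
q_values = critics(obs, action)                           # one column per objective
```

In the same spirit, the per-objective value estimates offer a natural uncertainty signal: disagreement across critic heads (or across an ensemble of them) could drive the uncertainty-based exploration bonus mentioned in the abstract, though the paper's exact formulation is not reproduced here.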
Related papers
- TeLL-Drive: Enhancing Autonomous Driving with Teacher LLM-Guided Deep Reinforcement Learning [61.33599727106222]
TeLL-Drive is a hybrid framework that integrates a Teacher LLM to guide an attention-based Student DRL policy.
A self-attention mechanism then fuses these strategies with the DRL agent's exploration, accelerating policy convergence and boosting robustness.
arXiv Detail & Related papers (2025-02-03T14:22:03Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Parameterized Decision-making with Multi-modal Perception for Autonomous Driving [12.21578713219778]
We propose a parameterized decision-making framework with multi-modal perception based on deep reinforcement learning, called AUTO.
A hybrid reward function takes into account aspects of safety, traffic efficiency, passenger comfort, and impact to guide the framework to generate optimal actions; a minimal sketch of such a multi-attribute reward appears after this list.
arXiv Detail & Related papers (2023-12-19T08:27:02Z) - Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem [1.1893676124374688]
We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
arXiv Detail & Related papers (2023-09-18T02:43:59Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z) - Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Behaviorally Diverse Traffic Simulation via Reinforcement Learning [16.99423598448411]
This paper introduces an easily-tunable policy generation algorithm for autonomous driving agents.
The proposed algorithm balances diversity and driving skills by leveraging the representation and exploration abilities of deep reinforcement learning.
We experimentally show the effectiveness of our methods on several challenging intersection scenes.
arXiv Detail & Related papers (2020-11-11T12:49:11Z) - MIDAS: Multi-agent Interaction-aware Decision-making with Adaptive Strategies for Urban Autonomous Navigation [22.594295184455]
This paper presents a reinforcement learning-based method named MIDAS, in which an ego-agent learns to affect the control actions of other cars.
MIDAS is validated using extensive experiments, and we show that it (i) can work across different road geometries, (ii) is robust to changes in the driving policies of external agents, and (iii) is more efficient and safer than existing approaches to interaction-aware decision-making.
arXiv Detail & Related papers (2020-08-17T04:34:25Z)
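Several of the entries above, like the main paper itself, revolve around decomposing the driving reward into attributes such as safety, efficiency, and comfort. The minimal sketch below illustrates that design choice; the specific terms, thresholds, and weights are assumptions for illustration and are not taken from any of the papers listed.

```python
def driving_rewards(ttc: float, speed: float, speed_limit: float, jerk: float) -> dict:
    """Return one reward term per driving objective instead of a single scalar."""
    safety = -1.0 if ttc < 2.0 else 0.0           # penalise a low time-to-collision (assumed 2 s threshold)
    efficiency = min(speed / speed_limit, 1.0)    # reward driving close to the speed limit
    comfort = -abs(jerk)                          # penalise jerky accelerations
    return {"safety": safety, "efficiency": efficiency, "comfort": comfort}


# A single-objective agent would collapse the vector into one weighted scalar,
# which is exactly what the ensemble-critic method above avoids by giving each
# objective its own critic head.
rewards = driving_rewards(ttc=1.5, speed=25.0, speed_limit=33.0, jerk=0.4)
scalarised = 0.5 * rewards["safety"] + 0.3 * rewards["efficiency"] + 0.2 * rewards["comfort"]
```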
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.