Learning to Model Diverse Driving Behaviors in Highly Interactive
Autonomous Driving Scenarios with Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2402.13481v1
- Date: Wed, 21 Feb 2024 02:44:33 GMT
- Title: Learning to Model Diverse Driving Behaviors in Highly Interactive
Autonomous Driving Scenarios with Multi-Agent Reinforcement Learning
- Authors: Liu Weiwei, Hu Wenxuan, Jing Wei, Lei Lanxin, Gao Lingping and Liu
Yong
- Abstract summary: Multi-Agent Reinforcement Learning (MARL) has shown impressive results in many driving scenarios.
However, the performance of these trained policies can be impacted when faced with diverse driving styles and personalities.
We introduce the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters.
- Score: 0.751422531359304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles trained through Multi-Agent Reinforcement Learning (MARL)
have shown impressive results in many driving scenarios. However, the
performance of these trained policies can be impacted when faced with diverse
driving styles and personalities, particularly in highly interactive
situations. This is because conventional MARL algorithms usually operate under
the assumption of fully cooperative behavior among all agents and focus on
maximizing team rewards during training. To address this issue, we introduce
the Personality Modeling Network (PeMN), which includes a cooperation value
function and personality parameters to model the varied interactions in
highly interactive scenarios. The PeMN also enables the training of a background
traffic flow with diverse behaviors, thereby improving the performance and
generalization of the ego vehicle. Our extensive experimental studies, which
incorporate different personality parameters in highly interactive driving
scenarios, demonstrate that the personality parameters effectively model
diverse driving styles and that policies trained with PeMN achieve better
generalization than traditional MARL methods.
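The abstract does not detail how the cooperation value function and personality parameters are combined, so the sketch below should be read as one plausible reading rather than the paper's formulation: each agent blends its individual reward with the shared team reward through a per-agent personality coefficient, with fully cooperative behavior recovered as a special case. The parameter name beta, the blending rule, and all reward values are illustrative assumptions.

```python
import numpy as np

def blended_reward(ego_reward: float, team_reward: float, beta: float) -> float:
    """Blend an agent's individual reward with the shared team reward.

    beta is a hypothetical cooperativeness ("personality") parameter in [0, 1]:
    beta = 1 recovers the fully cooperative objective assumed by conventional
    MARL, while smaller values yield more self-interested behavior.
    """
    return beta * team_reward + (1.0 - beta) * ego_reward

# Toy illustration: four background agents with different personalities
# accumulate blended returns over a two-step episode (all numbers made up).
rng = np.random.default_rng(seed=0)
personalities = rng.uniform(0.0, 1.0, size=4)            # one beta per agent
ego_rewards = np.array([[1.0, -0.5],
                        [0.2,  0.8],
                        [0.0,  1.0],
                        [0.5,  0.5]])                     # per-agent, per-step
team_rewards = np.array([0.3, 0.6])                       # shared per step

for i, beta in enumerate(personalities):
    ret = sum(blended_reward(ego_rewards[i, t], team_rewards[t], beta)
              for t in range(team_rewards.size))
    print(f"agent {i}: beta = {beta:.2f}, blended return = {ret:.2f}")
```

In the paper's setting, such personality-conditioned background traffic would then be used to train and stress-test the ego policy; the details of that pipeline are not reproduced here.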
Related papers
- Importance Sampling-Guided Meta-Training for Intelligent Agents in Highly Interactive Environments [43.144056801987595]
This study introduces a novel training framework that integrates guided meta RL with importance sampling (IS) to optimize training distributions.
By estimating a naturalistic distribution from real-world datasets, the framework ensures a balanced focus across common and extreme driving scenarios.
arXiv Detail & Related papers (2024-07-22T17:57:12Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - Probing Multimodal LLMs as World Models for Driving [72.18727651074563]
We look at the application of Multimodal Large Language Models (MLLMs) in autonomous driving.
Despite advances in models like GPT-4o, their performance in complex driving environments remains largely unexplored.
arXiv Detail & Related papers (2024-05-09T17:52:42Z) - Delving into Multi-modal Multi-task Foundation Models for Road Scene Understanding: From Learning Paradigm Perspectives [56.2139730920855]
We present a systematic analysis of MM-VUFMs specifically designed for road scenes.
Our objective is to provide a comprehensive overview of common practices, covering task-specific models, unified multi-modal models, unified multi-task models, and foundation model prompting techniques.
We provide insights into key challenges and future trends, such as closed-loop driving systems, interpretability, embodied driving agents, and world models.
arXiv Detail & Related papers (2024-02-05T12:47:09Z) - Looking for a better fit? An Incremental Learning Multimodal Object
Referencing Framework adapting to Individual Drivers [0.0]
The rapid advancement of the automotive industry has rendered traditional methods of vehicle interaction, such as touch-based and voice command systems, inadequate for a widening range of non-driving related tasks, such as referencing objects outside of the vehicle.
We propose IcRegress, a novel regression-based incremental learning approach that adapts to changing behavior and the unique characteristics of drivers engaged in the dual task of driving and referencing objects.
arXiv Detail & Related papers (2024-01-29T12:48:56Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model separately learns marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that fully captures the dependence structure among agents (a toy copula sketch appears after this list).
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - Pedestrian Behavior Prediction via Multitask Learning and Categorical
Interaction Modeling [13.936894582450734]
We propose a multitask learning framework that simultaneously predicts trajectories and actions of pedestrians by relying on multimodal data.
We show that our model achieves state-of-the-art performance and improves trajectory and action prediction by up to 22% and 6% respectively.
arXiv Detail & Related papers (2020-12-06T15:57:11Z) - Behaviorally Diverse Traffic Simulation via Reinforcement Learning [16.99423598448411]
This paper introduces an easily-tunable policy generation algorithm for autonomous driving agents.
The proposed algorithm balances diversity and driving skills by leveraging the representation and exploration abilities of deep reinforcement learning.
We experimentally show the effectiveness of our methods on several challenging intersection scenes.
arXiv Detail & Related papers (2020-11-11T12:49:11Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z) - Parallel Knowledge Transfer in Multi-Agent Reinforcement Learning [0.2538209532048867]
This paper proposes a novel knowledge transfer framework in MARL, PAT (Parallel Attentional Transfer).
We design two acting modes in PAT, student mode and self-learning mode.
When agents are unfamiliar with the environment, the shared attention mechanism in student mode effectively selects learning knowledge from other agents to decide agents' actions.
arXiv Detail & Related papers (2020-03-29T17:42:00Z)
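The copula paper listed above separates per-agent marginals from a copula that encodes inter-agent dependence. The toy sketch below only illustrates those mechanics with a hand-specified Gaussian copula and made-up speed marginals; it is not the paper's learned model, and the correlation value and marginal parameters are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Per-agent marginals: each agent's local behavior, here a preferred speed in m/s.
marginals = [stats.norm(loc=12.0, scale=1.5),   # cautious driver (assumed)
             stats.norm(loc=15.0, scale=2.5)]   # assertive driver (assumed)

# Gaussian copula with correlation rho couples the two agents' behaviors.
rho = 0.7
cov = np.array([[1.0, rho],
                [rho, 1.0]])

# 1) draw correlated standard normals, 2) map them to uniforms (the copula
# sample), 3) push each column through its agent's marginal inverse CDF.
z = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=1000)
u = stats.norm.cdf(z)                                     # dependence lives here
speeds = np.column_stack([marginals[i].ppf(u[:, i]) for i in range(2)])

print("empirical correlation of the joint behavior:",
      round(float(np.corrcoef(speeds.T)[0, 1]), 2))
```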