Introducing Risk Shadowing For Decisive and Comfortable Behavior Planning
- URL: http://arxiv.org/abs/2307.10714v1
- Date: Thu, 20 Jul 2023 09:16:01 GMT
- Title: Introducing Risk Shadowing For Decisive and Comfortable Behavior Planning
- Authors: Tim Puphal and Julian Eggert
- Abstract summary: We develop risk shadowing, a situation understanding method that allows us to go beyond single interactions.
We show that using risk shadowing as an upstream filter module for a behavior planner allows us to plan more decisive and comfortable driving strategies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of group interactions in urban driving.
State-of-the-art behavior planners for self-driving cars mostly consider each
agent-to-agent interaction separately in a cost function in order to find an
optimal behavior for the ego agent, such as not colliding with any of the
other agents. In this paper, we develop risk shadowing, a situation
understanding method that allows us to go beyond single interactions by
analyzing group interactions between three agents. Concretely, the presented
method identifies which first other agent does not need to be considered in
the ego agent's behavior planner, because a second other agent obstructs its
way and it therefore cannot reach the ego agent. In experiments, we show that
using risk shadowing as an upstream filter module for a behavior planner
allows us to plan more decisive and comfortable driving strategies than the
state of the art, given that safety is ensured in these cases.
The usability of the approach is demonstrated for different intersection
scenarios and longitudinal driving.
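
To make the filtering idea concrete, here is a minimal sketch of risk shadowing as an upstream filter, assuming constant-velocity predictions over a fixed horizon. All names (`Agent`, `time_to_conflict`, `risk_shadowing_filter`) and parameters are hypothetical illustrations; the paper's actual method assesses pairwise interactions with risk models rather than this simplified reachability check.

```python
# Hypothetical sketch of risk shadowing as an upstream filter for a
# behavior planner. Assumption: agents move with constant velocity over
# the horizon; the paper uses risk models instead of this naive check.
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Agent:
    position: Tuple[float, float]  # (x, y) in meters
    velocity: Tuple[float, float]  # (vx, vy), assumed constant


def time_to_conflict(a: Agent, b: Agent, radius: float = 2.0,
                     horizon: float = 8.0, dt: float = 0.1) -> Optional[float]:
    """Earliest time within the horizon at which two constant-velocity
    agents come within `radius` meters of each other, or None."""
    t = 0.0
    while t <= horizon:
        ax = a.position[0] + a.velocity[0] * t
        ay = a.position[1] + a.velocity[1] * t
        bx = b.position[0] + b.velocity[0] * t
        by = b.position[1] + b.velocity[1] * t
        if math.hypot(ax - bx, ay - by) < radius:
            return t
        t += dt
    return None


def risk_shadowing_filter(ego: Agent, others: List[Agent]) -> List[Agent]:
    """Keep only agents the behavior planner must consider.
    A first agent A is 'shadowed' (filtered out) if some second agent B
    conflicts with A strictly before A would conflict with the ego."""
    relevant = []
    for i, a in enumerate(others):
        t_ego = time_to_conflict(a, ego)
        if t_ego is None:
            continue  # A never reaches the ego within the horizon
        shadowed = any(
            (t_b := time_to_conflict(a, b)) is not None and t_b < t_ego
            for j, b in enumerate(others) if j != i
        )
        if not shadowed:
            relevant.append(a)
    return relevant
```

The downstream behavior planner would then build its cost function only over `risk_shadowing_filter(ego, others)`, which is the "upstream filter module" role described in the abstract; the three-agent group structure appears in the combination of the ego, the shadowed first agent, and the obstructing second agent.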
Related papers
- Learning responsibility allocations for multi-agent interactions: A differentiable optimization approach with control barrier functions [12.074590482085831]
We seek to codify factors governing safe multi-agent interactions via the lens of responsibility.
We propose a data-driven modeling approach based on control barrier functions and differentiable optimization.
arXiv Detail & Related papers (2024-10-09T20:20:41Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Importance Filtering with Risk Models for Complex Driving Situations [1.4699455652461728]
Self-driving cars face complex driving situations with a large number of agents when moving through crowded cities.
Some of these agents do not actually influence the behavior of the self-driving car.
Filtering out such unimportant agents would inherently simplify the behavior or motion planning task for the system.
arXiv Detail & Related papers (2023-03-13T09:03:10Z)
- Safe adaptation in multiagent competition [48.02377041620857]
In multiagent competitive scenarios, ego-agents may have to adapt to new opponents with previously unseen behaviors.
As the ego-agent updates its own behavior to exploit the opponent, its own behavior could become more exploitable.
We develop a safe adaptation approach in which the ego-agent is trained against a regularized opponent model.
arXiv Detail & Related papers (2022-03-14T23:53:59Z)
- Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z)
- Deep Structured Reactive Planning [94.92994828905984]
We propose a novel data-driven, reactive planning objective for self-driving vehicles.
We show that our model outperforms a non-reactive variant in successfully completing highly complex maneuvers.
arXiv Detail & Related papers (2021-01-18T01:43:36Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- MIDAS: Multi-agent Interaction-aware Decision-making with Adaptive Strategies for Urban Autonomous Navigation [22.594295184455]
This paper builds a reinforcement learning-based method named MIDAS where an ego-agent learns to affect the control actions of other cars.
MIDAS is validated using extensive experiments and we show that it (i) can work across different road geometries, (ii) is robust to changes in the driving policies of external agents, and (iii) is more efficient and safer than existing approaches to interaction-aware decision-making.
arXiv Detail & Related papers (2020-08-17T04:34:25Z)
- Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor.
arXiv Detail & Related papers (2020-06-22T10:48:17Z)