Enhancing Safety for Autonomous Agents in Partly Concealed Urban Traffic Environments Through Representation-Based Shielding
- URL: http://arxiv.org/abs/2407.04343v1
- Date: Fri, 5 Jul 2024 08:34:49 GMT
- Title: Enhancing Safety for Autonomous Agents in Partly Concealed Urban Traffic Environments Through Representation-Based Shielding
- Authors: Pierre Haritz, David Wanke, Thomas Liebig
- Abstract summary: We propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent.
Our findings pave the way for more robust and reliable autonomous navigation strategies.
- Score: 2.9685635948300004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Navigating unsignalized intersections in urban environments poses a complex challenge for self-driving vehicles, where issues such as view obstructions, unpredictable pedestrian crossings, and diverse traffic participants demand a great focus on crash prevention. In this paper, we propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent, enabling the safe navigation of previously uncharted road maps. Our approach surpasses several baseline models by a significant margin in terms of safety and energy consumption metrics. These improvements are achieved while maintaining a competitive average travel speed. Our findings pave the way for more robust and reliable autonomous navigation strategies, promising safer and more efficient urban traffic environments.
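As an illustrative aside, a shielding layer of this kind can be sketched as follows. This is a minimal hypothetical sketch, not the paper's implementation: the `Observation` fields, the occlusion discount, and the stopping-distance rule are all assumptions made for the example.

```python
# Hypothetical sketch of representation-based action shielding for an RL agent
# approaching an unsignalized intersection. Names and thresholds are
# illustrative assumptions, not the paper's actual interface.
from dataclasses import dataclass


@dataclass
class Observation:
    """Ego-centric state: only what the agent can currently perceive."""
    ego_speed: float        # current speed in m/s
    gap_to_conflict: float  # metres of visibly clear road ahead
    occluded: bool          # is part of the intersection hidden from view?


def max_safe_speed(obs: Observation, decel: float = 4.0) -> float:
    """Highest speed from which the agent can still stop within the
    visible gap (v^2 = 2*a*d); occlusion shrinks the usable gap."""
    gap = obs.gap_to_conflict * (0.5 if obs.occluded else 1.0)
    return (2.0 * decel * max(gap, 0.0)) ** 0.5


def shield(obs: Observation, proposed_accel: float, dt: float = 0.1) -> float:
    """Pass the policy's acceleration through unless it would push the
    agent above the safe speed envelope; otherwise brake back onto it."""
    next_speed = obs.ego_speed + proposed_accel * dt
    limit = max_safe_speed(obs)
    if next_speed > limit:
        return (limit - obs.ego_speed) / dt  # override: decelerate
    return proposed_accel
```

Under this sketch, an occlusion halves the trusted gap, so the same proposed acceleration can be accepted on an open intersection but overridden on a partly concealed one.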
Related papers
- GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [82.19172267487998]
This paper introduces GARLIC: a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z)
- Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
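Attention with masking, as mentioned above, is a common way to handle a variable number of partially observed neighbours. The sketch below shows the generic mechanism only; the feature layout and mask semantics are assumptions for illustration, not the paper's architecture.

```python
# Illustrative masked scaled dot-product attention over nearby vehicles.
# Occluded vehicle slots (visible[i] == False) get a score of -inf and
# therefore zero attention weight, so they cannot influence the output.
import numpy as np


def masked_attention(query, keys, values, visible):
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)           # one score per vehicle slot
    scores = np.where(visible, scores, -np.inf)  # mask unobserved slots
    w = np.exp(scores - scores.max())            # stable softmax
    w = w / w.sum()
    return w @ values                            # weighted summary feature


# Ego query attends over three vehicle slots, one of which is occluded.
q = np.ones(4)
K = np.random.randn(3, 4)
V = np.random.randn(3, 4)
out = masked_attention(q, K, V, np.array([True, True, False]))
```

Because masked slots receive zero weight, the output is invariant to whatever stale features happen to sit in an occluded slot, which is the point of masking under partial observability.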
arXiv Detail & Related papers (2024-03-18T16:13:02Z)
- Reinforcement Learning with Latent State Inference for Autonomous On-ramp Merging under Observation Delay [6.0111084468944]
We introduce the Lane-keeping, Lane-changing with Latent-state Inference and Safety Controller (L3IS) agent.
L3IS is designed to perform the on-ramp merging task safely without comprehensive knowledge about surrounding vehicles' intents or driving styles.
We present an augmentation of this agent called AL3IS that accounts for observation delays, allowing the agent to make more robust decisions in real-world environments.
arXiv Detail & Related papers (2024-03-18T15:02:46Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the ATSC's attack surface and its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil-vehicle injection that creates congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Towards Robust On-Ramp Merging via Augmented Multimodal Reinforcement Learning [9.48157144651867]
We present a novel approach for robust on-ramp merging of connected and autonomous vehicles (CAVs) via augmented multi-modal reinforcement learning.
Specifically, we formulate the on-ramp merging problem as a Markov decision process (MDP) by taking driving safety, comfort driving behavior, and traffic efficiency into account.
To provide reliable merging maneuvers, we simultaneously leverage Basic Safety Messages (BSM) and surveillance images for multi-modal observation.
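An MDP reward trading off safety, comfort, and efficiency, as described above, typically sums weighted penalty terms. The sketch below is a generic illustration; the weights, thresholds, and term shapes are assumptions, not the paper's reward function.

```python
# Minimal sketch of a merging reward combining driving safety, comfort,
# and traffic efficiency. All weights and the target speed are
# illustrative assumptions.
def merge_reward(collision: bool, jerk: float, speed: float,
                 target_speed: float = 25.0,
                 w_safe: float = 10.0, w_comfort: float = 0.1,
                 w_eff: float = 1.0) -> float:
    r_safe = -w_safe if collision else 0.0           # hard safety penalty
    r_comfort = -w_comfort * abs(jerk)               # penalize harsh jerk
    r_eff = -w_eff * abs(speed - target_speed) / target_speed  # speed tracking
    return r_safe + r_comfort + r_eff
```

With this shape, the reward is zero only for a collision-free, smooth trajectory at the target speed, and each violated objective subtracts from it independently.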
arXiv Detail & Related papers (2022-07-21T16:34:57Z)
- Explainable, automated urban interventions to improve pedestrian and vehicle safety [0.8620335948752805]
This paper combines public data sources, large-scale street imagery and computer vision techniques to approach pedestrian and vehicle safety.
The steps involved in this pipeline include the adaptation and training of a Residual Convolutional Neural Network to determine a hazard index for each given urban scene.
The outcome of this computational approach is a fine-grained map of hazard levels across a city, and the identification of interventions that might simultaneously improve pedestrian and vehicle safety.
arXiv Detail & Related papers (2021-10-22T09:17:39Z)
- Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Emergent Road Rules In Multi-Agent Driving Environments [84.82583370858391]
We analyze what ingredients in driving environments cause the emergence of road rules.
We find that two crucial factors are noisy perception and agents' spatial density.
Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
arXiv Detail & Related papers (2020-11-21T09:43:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.