SignEye: Traffic Sign Interpretation from Vehicle First-Person View
- URL: http://arxiv.org/abs/2411.11507v1
- Date: Mon, 18 Nov 2024 12:12:33 GMT
- Title: SignEye: Traffic Sign Interpretation from Vehicle First-Person View
- Authors: Chuang Yang, Xu Han, Tao Han, Yuejiao SU, Junyu Gao, Hongyuan Zhang, Yi Wang, Lap-Pui Chau
- Abstract summary: Traffic signs play a key role in assisting autonomous driving systems (ADS) by enabling the assessment of vehicle behavior in compliance with traffic regulations.
We introduce a new task: traffic sign interpretation from the vehicle's first-person view, referred to as TSI-FPV.
We also develop a traffic guidance assistant (TGA) scenario application to re-explore the role of traffic signs in ADS.
- Score: 43.49612694851131
- Abstract: Traffic signs play a key role in assisting autonomous driving systems (ADS) by enabling the assessment of vehicle behavior in compliance with traffic regulations and providing navigation instructions. However, current works are limited to basic sign understanding without considering the egocentric vehicle's spatial position, which fails to support further regulation assessment and direction navigation. To address these issues, we introduce a new task: traffic sign interpretation from the vehicle's first-person view, referred to as TSI-FPV. Meanwhile, we develop a traffic guidance assistant (TGA) scenario application to re-explore the role of traffic signs in ADS as a complement to popular autonomous technologies (such as obstacle perception). Notably, TGA is not a replacement for electronic map navigation; rather, it can serve as an automatic tool for updating and complementing electronic maps in situations such as offline conditions or temporary sign adjustments. Lastly, a spatial and semantic logic-aware stepwise reasoning pipeline (SignEye) is constructed to achieve TSI-FPV and TGA, and an application-specific dataset (Traffic-CN) is built. Experiments show that TSI-FPV and TGA are achievable via our SignEye trained on Traffic-CN. The results also demonstrate that TGA can provide complementary information to ADS beyond existing popular autonomous technologies.
Related papers
- Driving by the Rules: A Benchmark for Integrating Traffic Sign Regulations into Vectorized HD Map [15.57801519192153]
We introduce MapDR, a novel dataset designed for the extraction of Driving Rules from traffic signs.
MapDR features over 10,000 annotated video clips that capture the intricate correlation between traffic sign regulations and lanes.
arXiv Detail & Related papers (2024-10-31T09:53:21Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, which leads to smoother traffic flow, reduced idling time, and lower CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework, called TrafficDojo, for vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- REDriver: Runtime Enforcement for Autonomous Vehicles [6.97499033700151]
We propose REDriver, a general and modular approach to runtime enforcement of autonomous driving systems.
REDriver monitors the planned trajectory of the ADS based on a quantitative semantics of STL.
It uses a gradient-driven algorithm to repair the trajectory when a violation of the specification is likely (a toy robustness-and-repair sketch appears after this list).
arXiv Detail & Related papers (2024-01-04T13:08:38Z)
- Traffic Sign Interpretation in Real Road Scene [18.961971178824715]
We propose a traffic sign interpretation (TSI) task, which aims to interpret global, semantically interrelated traffic signs into natural language.
The dataset consists of real road scene images captured on highways and urban roads in China from a driver's perspective.
Experiments on TSI-CN demonstrate that the TSI task is achievable and the TSI architecture can interpret traffic signs from scenes successfully.
arXiv Detail & Related papers (2023-11-17T02:30:36Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability to perform obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Autonomous Navigation through intersections with Graph Convolutional Networks and Conditional Imitation Learning for Self-driving Cars [10.080958939027363]
In autonomous driving, navigation through unsignaled intersections is a challenging task.
We propose a novel branched network, G-CIL, for navigation policy learning.
Our end-to-end trainable neural network outperforms the baselines with a higher success rate and shorter navigation time.
arXiv Detail & Related papers (2021-02-01T07:33:12Z)
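The REDriver entry above mentions monitoring via the quantitative (robustness) semantics of Signal Temporal Logic (STL) and gradient-driven trajectory repair. The sketch below is only a minimal illustration of that idea, not the authors' implementation: it scores a planned speed profile against a simple "always stay below the speed limit" specification and nudges the worst-offending time step until the robustness is non-negative. The speed limit, the trajectory format, and the step size are assumptions made for the example.

```python
import numpy as np

def robustness_always_below(speeds, limit):
    """Quantitative (robustness) semantics of the STL formula G(speed <= limit):
    the minimum margin (limit - speed) over the planning horizon.
    A negative value means the specification is violated."""
    return float(np.min(limit - speeds))

def repair(speeds, limit, step=0.5, max_iters=100):
    """Toy gradient-style repair: while robustness is negative, lower the speed
    at the worst-offending time step (a subgradient step on the min-margin)."""
    speeds = speeds.astype(float).copy()
    for _ in range(max_iters):
        if robustness_always_below(speeds, limit) >= 0.0:
            break
        worst = int(np.argmax(speeds))  # time step with the smallest margin
        speeds[worst] -= step           # nudge it toward satisfying the spec
    return speeds

if __name__ == "__main__":
    limit = 13.9                                         # ~50 km/h in m/s (assumed)
    planned = np.array([10.0, 12.5, 15.2, 14.1, 11.0])   # hypothetical planned speeds
    print("robustness before:", robustness_always_below(planned, limit))
    repaired = repair(planned, limit)
    print("robustness after: ", robustness_always_below(repaired, limit))
```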
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.