Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving
- URL: http://arxiv.org/abs/2409.12620v2
- Date: Mon, 23 Sep 2024 09:54:59 GMT
- Title: Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving
- Authors: Sándor Kunsági-Máté, Levente Pető, Lehel Seres, Tamás Matuszka
- Abstract summary: 3D detection of traffic management objects, such as traffic lights and road signs, is vital for self-driving cars.
This paper introduces a novel method for automatically generating 3D bounding box annotations for traffic lights and signs, effective up to a range of 200 meters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D detection of traffic management objects, such as traffic lights and road signs, is vital for self-driving cars, particularly for address-to-address navigation where vehicles encounter numerous intersections with these static objects. This paper introduces a novel method for automatically generating accurate and temporally consistent 3D bounding box annotations for traffic lights and signs, effective up to a range of 200 meters. These annotations are suitable for training real-time models used in self-driving cars, which need a large amount of training data. The proposed method relies only on RGB images with 2D bounding boxes of traffic management objects, which can be automatically obtained using an off-the-shelf image-space detector neural network, along with GNSS/INS data, eliminating the need for LiDAR point cloud data.
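The abstract does not include reference code; as a minimal sketch of the underlying geometry, the snippet below triangulates a static object's 3D position from 2D bounding-box centers tracked over several frames, the camera intrinsics, and camera poses derived from a GNSS/INS trajectory. Function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate_object(pixel_centers, intrinsics, cam_to_world_poses):
    """Triangulate a static object's 3D position (world frame) from its
    2D bounding-box centers observed in several frames.

    pixel_centers      : (N, 2) array of 2D box centers (u, v)
    intrinsics         : (3, 3) camera matrix K
    cam_to_world_poses : list of N (4, 4) camera-to-world matrices
                         (e.g. derived from the GNSS/INS trajectory)
    """
    A = []
    for (u, v), T_cw in zip(pixel_centers, cam_to_world_poses):
        # World-to-camera projection matrix P = K [R|t]
        T_wc = np.linalg.inv(T_cw)
        P = intrinsics @ T_wc[:3, :]
        # Standard DLT rows: u * P[2] - P[0] = 0,  v * P[2] - P[1] = 0
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    A = np.asarray(A)
    # Homogeneous least-squares solution via SVD
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # 3D point in world coordinates
```

A production version of this idea would additionally need 2D box association across frames, outlier rejection, and estimation of box extent and orientation to reach the temporally consistent annotations the paper reports up to 200 meters.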
Related papers
- Railway LiDAR semantic segmentation based on intelligent semi-automated data annotation [0.48212500317840945]
We present an approach for point-wise 3D semantic segmentation based on the 2DPass network architecture, using scans and images jointly.
We also present a semi-automated intelligent data annotation approach, which we use to efficiently and accurately label the required dataset recorded on a railway track in Germany.
Our contributions are threefold: we annotate rail data including camera and LiDAR data from the railway environment, transfer-label the raw LiDAR point clouds using an image segmentation network, and efficiently train a state-of-the-art 3D LiDAR semantic segmentation network by leveraging active learning.
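The transfer-labeling step is only named above; a common way to realize it, assuming a calibrated camera-LiDAR pair, is to project each LiDAR point into the segmented image and copy the pixel's class label. The sketch below uses illustrative names and is not the authors' code.

```python
import numpy as np

def transfer_labels(points_lidar, seg_mask, K, T_cam_from_lidar, unlabeled=-1):
    """Assign semantic labels to LiDAR points by projecting them into a
    per-pixel segmentation mask from a calibrated camera.

    points_lidar     : (N, 3) points in the LiDAR frame
    seg_mask         : (H, W) integer class mask from an image segmentation net
    K                : (3, 3) camera intrinsics
    T_cam_from_lidar : (4, 4) extrinsic calibration (LiDAR -> camera)
    """
    H, W = seg_mask.shape
    # Transform points into the camera frame
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    labels = np.full(len(points_lidar), unlabeled, dtype=np.int64)
    in_front = pts_cam[:, 2] > 0            # keep points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)

    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_mask[uv[valid, 1], uv[valid, 0]]
    return labels
```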
arXiv Detail & Related papers (2024-10-17T09:36:19Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
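The summary names the ingredients (scene-specific neural SDF, sparse feature grids, confidence scores) without detail; purely as an illustration of confidence-weighted fusion, the snippet below combines signed-distance samples from several submaps on a shared voxel grid. It is a schematic stand-in, not the authors' neural formulation.

```python
import numpy as np

def fuse_submaps(sdf_stack, conf_stack):
    """Fuse per-submap signed-distance samples on a shared voxel grid.

    sdf_stack  : (S, X, Y, Z) signed distances predicted by S local submaps
    conf_stack : (S, X, Y, Z) per-voxel confidence in [0, 1]
    Returns the confidence-weighted SDF and the accumulated confidence.
    """
    weight_sum = conf_stack.sum(axis=0)
    fused_sdf = (sdf_stack * conf_stack).sum(axis=0) / np.maximum(weight_sum, 1e-6)
    return fused_sdf, weight_sum
```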
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- 3D Object Detection and High-Resolution Traffic Parameters Extraction Using Low-Resolution LiDAR Data [14.142956899468922]
This study proposes an innovative framework that alleviates the need for multiple LiDAR systems and simplifies the laborious 3D annotation process.
Using the 2D bounding box detection and extracted height information, this study is able to generate 3D bounding boxes automatically without human intervention.
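The 2D-to-3D lifting step is left implicit above; assuming a pinhole camera and the extracted object height, depth can be estimated from the ratio of physical to pixel height and the box center back-projected, as in the illustrative sketch below (orientation and extent are ignored, and the names are not from the paper).

```python
import numpy as np

def lift_2d_box(box_xyxy, object_height_m, K):
    """Estimate a 3D center for a detected object from its 2D box and a
    known (or extracted) physical height, using similar triangles.

    box_xyxy        : (x1, y1, x2, y2) pixel coordinates of the 2D box
    object_height_m : real-world height of the object in meters
    K               : (3, 3) camera intrinsics
    """
    x1, y1, x2, y2 = box_xyxy
    pixel_height = max(y2 - y1, 1e-6)
    fy = K[1, 1]
    depth = fy * object_height_m / pixel_height     # Z from similar triangles
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Back-project the box center at the estimated depth
    center = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return center   # (X, Y, Z) in the camera frame
```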
arXiv Detail & Related papers (2024-01-13T01:22:20Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
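The summary only mentions spatially quantized historical features; one plausible reading, given purely as an assumption, is to accumulate points from past traversals into a quantized BEV grid and hand per-cell statistics to the detector alongside the current scan.

```python
import numpy as np

def quantize_history(past_points, x_range=(-50, 50), y_range=(-50, 50), cell=0.5):
    """Build a simple BEV occupancy/height feature grid from accumulated
    past-traversal LiDAR points (illustrative stand-in only).

    past_points : (N, 3) points from previous traversals, in a common frame
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((past_points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((past_points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    counts = np.zeros((nx, ny), dtype=np.float32)
    max_z = np.full((nx, ny), -np.inf, dtype=np.float32)
    np.add.at(counts, (ix[ok], iy[ok]), 1.0)
    np.maximum.at(max_z, (ix[ok], iy[ok]), past_points[ok, 2])
    max_z[counts == 0] = 0.0
    return np.stack([counts, max_z], axis=0)   # (2, nx, ny) feature grid
```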
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Patterns of Vehicle Lights: Addressing Complexities in Curation and Annotation of Camera-Based Vehicle Light Datasets and Metrics [0.0]
This paper explores the representation of vehicle lights in computer vision and its implications for various tasks in the field of autonomous driving.
Three important tasks in autonomous driving that can benefit from vehicle light detection are identified.
The challenges of collecting and annotating large datasets for training data-driven models are also addressed.
arXiv Detail & Related papers (2023-07-26T21:48:14Z)
- Real-Time And Robust 3D Object Detection with Roadside LiDARs [20.10416681832639]
We design a 3D object detection model that can detect traffic participants in roadside LiDARs in real-time.
Our model uses an existing 3D detector as a baseline and improves its accuracy.
We make a significant contribution with our LiDAR-based 3D detector that can be used for smart city applications.
arXiv Detail & Related papers (2022-07-11T21:33:42Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
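The aggregation rule is not specified in this summary; the sketch below shows plain dataset-size-weighted federated averaging (FedAvg-style) over PyTorch state dicts as one possible choice, with the SNN-specific training details omitted.

```python
import copy
import torch

def federated_average(client_state_dicts, client_sizes):
    """Aggregate locally trained client models by a dataset-size-weighted
    average of their parameters (FedAvg-style)."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        stacked = torch.stack(
            [sd[key].float() * (n / total)
             for sd, n in zip(client_state_dicts, client_sizes)]
        )
        global_state[key] = stacked.sum(dim=0)
    return global_state
```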
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Traffic-Net: 3D Traffic Monitoring Using a Single Camera [1.1602089225841632]
We provide a practical platform for real-time traffic monitoring using a single CCTV traffic camera.
We adapt a custom YOLOv5 deep neural network for vehicle/pedestrian detection and combine it with an enhanced SORT tracking algorithm.
We also develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams.
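The summary names YOLOv5 detection plus an enhanced SORT tracker; the loop below is a stripped-down illustration of the detect-then-associate pattern using greedy IoU matching. The real tracker adds Kalman prediction and other cues, and `detector` is a hypothetical stand-in callable.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def track_stream(frames, detector, iou_thresh=0.3):
    """Greedy IoU association of per-frame detections into track IDs."""
    tracks, next_id, history = {}, 0, []
    for frame in frames:
        detections = detector(frame)          # list of (x1, y1, x2, y2) boxes
        assigned = {}
        for det in detections:
            best_id, best_iou = None, iou_thresh
            for tid, prev_box in tracks.items():
                score = iou(det, prev_box)
                if score > best_iou and tid not in assigned:
                    best_id, best_iou = tid, score
            if best_id is None:               # no match: start a new track
                best_id, next_id = next_id, next_id + 1
            assigned[best_id] = det
        tracks = assigned
        history.append(dict(tracks))
    return history
```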
arXiv Detail & Related papers (2021-09-19T16:59:01Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
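The data-association network itself is not described here; as a generic illustration of neural message passing over track-detection pairs, the PyTorch module below refines edge (association) features and scores every pair. It is not the authors' architecture.

```python
import torch
import torch.nn as nn

class EdgeMessagePassing(nn.Module):
    """One round of message passing that refines association (edge) features
    between track and detection node embeddings."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU(),
            nn.Linear(edge_dim, edge_dim),
        )
        self.score_head = nn.Linear(edge_dim, 1)

    def forward(self, track_feats, det_feats, edge_feats):
        # track_feats: (T, D), det_feats: (N, D), edge_feats: (T, N, E)
        T, N = track_feats.size(0), det_feats.size(0)
        t = track_feats.unsqueeze(1).expand(T, N, -1)
        d = det_feats.unsqueeze(0).expand(T, N, -1)
        edge_feats = self.edge_mlp(torch.cat([t, d, edge_feats], dim=-1))
        # Association logits for every (track, detection) pair
        return edge_feats, self.score_head(edge_feats).squeeze(-1)
```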
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to overfitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
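The fine-tuning procedure is summarized without detail; the loop below sketches generic confidence-thresholded self-training, where `detector.predict` and `detector.train_step` are hypothetical stand-ins rather than the paper's interfaces.

```python
def self_train(detector, target_frames, confidence_thresh=0.7, epochs=5):
    """Fine-tune a detector on its own high-confidence predictions
    (pseudo-labels) collected in the unlabeled target domain."""
    # 1) Generate pseudo-labels with the source-trained detector
    pseudo_labeled = []
    for frame in target_frames:
        boxes = [b for b in detector.predict(frame) if b.score >= confidence_thresh]
        if boxes:
            pseudo_labeled.append((frame, boxes))

    # 2) Fine-tune on the pseudo-labeled target data
    for _ in range(epochs):
        for frame, boxes in pseudo_labeled:
            detector.train_step(frame, boxes)   # hypothetical training API
    return detector
```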
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.