Map-aided annotation for pole base detection
- URL: http://arxiv.org/abs/2403.01868v1
- Date: Mon, 4 Mar 2024 09:23:11 GMT
- Title: Map-aided annotation for pole base detection
- Authors: Benjamin Missaoui (Heudiasyc), Maxime Noizet (Heudiasyc), Philippe Xu
(Heudiasyc)
- Abstract summary: In this paper, a 2D HD map is used to automatically annotate pole-like features in images.
In the absence of height information, the map features are represented as pole bases at the ground level.
We show how an object detector can be trained to detect a pole base.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For autonomous navigation, high definition maps are a widely used source of
information. Pole-like features encoded in HD maps such as traffic signs,
traffic lights or street lights can be used as landmarks for localization. For
this purpose, they first need to be detected by the vehicle using its embedded
sensors. While geometric models can be used to process 3D point clouds
retrieved by lidar sensors, modern image-based approaches rely on deep neural
networks and therefore heavily depend on annotated training data. In this paper,
a 2D HD map is used to automatically annotate pole-like features in images. In
the absence of height information, the map features are represented as pole
bases at the ground level. We show how an additional lidar sensor can be used
to filter out occluded features and refine the ground projection. We also
demonstrate how an object detector can be trained to detect a pole base. To
evaluate our methodology, it is first validated with data manually annotated
from semantic segmentation and then compared to our own automatically generated
annotated data recorded in the city of Compiègne, France. Erratum: in the
original version [1], an error occurred in the accuracy evaluation of the
studied models, and the evaluation method applied to the detection results was
not clearly defined. This revision corrects that section and presents updated
results, in particular the Mean Absolute Errors (MAE).
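The ground-level projection at the heart of this annotation scheme can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the intrinsics, camera pose, and pole position are made-up values, and the actual pipeline additionally uses lidar to filter occluded features and refine the ground projection.

```python
import numpy as np

# Sketch: project a 2D HD-map pole base, assumed to lie on the ground
# plane, into a camera image to produce an automatic point annotation.
# Convention: camera frame with x right, y down, z forward.

def project_pole_base(p_world, T_cam_world, K, img_size):
    """Return the pixel (u, v) of a ground-level pole base, or None if
    it falls behind the camera or outside the image."""
    p_h = np.append(p_world, 1.0)        # homogeneous world point
    p_cam = (T_cam_world @ p_h)[:3]      # world -> camera frame
    if p_cam[2] <= 0:                    # behind the camera
        return None
    uvw = K @ p_cam                      # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    w, h = img_size
    if not (0 <= u < w and 0 <= v < h):  # outside the image
        return None
    return u, v

# Illustrative setup: camera 1.5 m above the ground, no rotation;
# the map gives a pole base on the ground, 10 m straight ahead.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
T[1, 3] = 1.5                            # ground sits 1.5 m below the camera
uv = project_pole_base(np.array([0.0, 0.0, 10.0]), T, K, (1280, 720))
```

The returned pixel would then serve as the training label for the pole-base detector; in the paper's setting, features that fail the lidar occlusion check would be discarded before training.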
Related papers
- TopoSD: Topology-Enhanced Lane Segment Perception with SDMap Prior [70.84644266024571]
We propose to train a perception model to "see" standard definition maps (SDMaps).
We encode SDMap elements into neural spatial map representations and instance tokens, and then incorporate such complementary features as prior information.
Based on the lane segment representation framework, the model simultaneously predicts lanes, centrelines and their topology.
arXiv Detail & Related papers (2024-11-22T06:13:42Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
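The centerlines-as-cluster-centers idea above can be sketched as a nearest-polyline assignment in the bird's-eye-view plane. This is a hypothetical simplification, not the paper's learned model, which performs the assignment with a trained network:

```python
import numpy as np

# Sketch: assign detected objects to lane centerlines by treating each
# centerline (a sampled polyline) as a cluster center and picking the
# nearest one per object.

def assign_objects_to_centerlines(objects, centerlines):
    """objects: (N, 2) BEV positions; centerlines: list of (M_i, 2)
    polylines. Returns the index of the nearest centerline per object."""
    assignments = []
    for obj in objects:
        # Distance to a polyline, approximated by its sampled vertices.
        dists = [np.linalg.norm(line - obj, axis=1).min()
                 for line in centerlines]
        assignments.append(int(np.argmin(dists)))
    return assignments

# Two straight lanes 4 m apart, with one vehicle near each.
lane0 = np.column_stack([np.linspace(0, 50, 100), np.zeros(100)])
lane1 = np.column_stack([np.linspace(0, 50, 100), np.full(100, 4.0)])
objs = np.array([[10.0, 0.5], [20.0, 3.6]])
labels = assign_objects_to_centerlines(objs, [lane0, lane1])  # [0, 1]
```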
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- DisPlacing Objects: Improving Dynamic Vehicle Detection via Visual Place Recognition under Adverse Conditions [29.828201168816243]
We investigate whether a prior map can be leveraged to aid in the detection of dynamic objects in a scene without the need for a 3D map.
We contribute an algorithm which refines an initial set of candidate object detections and produces a refined subset of highly accurate detections using a prior map.
arXiv Detail & Related papers (2023-06-30T10:46:51Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Robust Object Detection in Remote Sensing Imagery with Noisy and Sparse Geo-Annotations (Full Version) [4.493174773769076]
In this paper, we present a novel approach for training object detectors with extremely noisy and incomplete annotations.
Our method is based on a teacher-student learning framework and a correction module accounting for imprecise and missing annotations.
We demonstrate that our approach improves standard detectors by 37.1% $AP_50$ on a noisy real-world remote-sensing dataset.
arXiv Detail & Related papers (2022-10-24T07:25:31Z)
- OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data [8.486063950768694]
We propose a method to generate attribution maps for 3D object detection in LiDAR point clouds.
These maps indicate the importance of each 3D point in predicting the specific objects.
We show a detailed evaluation of the attribution maps and demonstrate that they are interpretable and highly informative.
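The occlusion-based attribution idea can be sketched as follows. This is a toy reconstruction, not the OccAM's Laser implementation: `toy_score` stands in for a trained detector, and each point's importance is measured as the drop in detection score when that point is removed from the input.

```python
import numpy as np

# Sketch: occlusion-based attribution for a point-cloud detector.
def occlusion_attribution(points, score_fn):
    base = score_fn(points)
    attributions = np.empty(len(points))
    for i in range(len(points)):
        occluded = np.delete(points, i, axis=0)  # drop one point
        attributions[i] = base - score_fn(occluded)
    return attributions

# Toy "detector": score = fraction of the 4 input points inside a
# 1 m box at the origin (purely illustrative).
def toy_score(pts):
    inside = np.all(np.abs(pts) < 0.5, axis=1)
    return inside.sum() / 4.0

pts = np.array([[0.1, 0.0, 0.0],   # inside the box -> positive attribution
                [0.2, 0.1, 0.0],
                [5.0, 5.0, 0.0],   # far away -> zero attribution
                [6.0, 0.0, 0.0]])
attr = occlusion_attribution(pts, toy_score)
```

The real method occludes small voxels rather than single points and repeats the procedure with random masks, but the attribution signal is the same score difference.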
arXiv Detail & Related papers (2022-04-13T18:00:30Z)
- Continuous Self-Localization on Aerial Images Using Visual and Lidar Sensors [25.87104194833264]
We propose a novel method for geo-tracking in outdoor environments by registering a vehicle's sensor information with aerial imagery of an unseen target region.
We train a model in a metric learning setting to extract visual features from ground and aerial images.
Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos.
arXiv Detail & Related papers (2022-03-07T12:25:44Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- MapFusion: A General Framework for 3D Object Detection with HDMaps [17.482961825285013]
We propose MapFusion to integrate the map information into modern 3D object detector pipelines.
By fusing the map information, we achieve improvements of 1.27 to 2.79 mean Average Precision (mAP) points on three strong 3D object detection baselines.
arXiv Detail & Related papers (2021-03-10T08:36:59Z)
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.