Fiducial Tag Localization on a 3D LiDAR Prior Map
- URL: http://arxiv.org/abs/2209.01072v3
- Date: Wed, 5 Jun 2024 17:12:32 GMT
- Title: Fiducial Tag Localization on a 3D LiDAR Prior Map
- Authors: Yibo Liu, Jinjun Shan, Hunter Schofield
- Abstract summary: The existing LiDAR fiducial tag localization methods do not apply to 3D LiDAR maps.
We develop a novel approach to directly localize fiducial tags on a 3D LiDAR prior map.
We conduct both qualitative and quantitative experiments to demonstrate that our approach is the first method capable of localizing tags on a 3D LiDAR map.
- Score: 0.6554326244334868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The LiDAR fiducial tag, akin to the well-known AprilTag used in camera applications, serves as a convenient resource for imparting artificial features to the LiDAR sensor, facilitating robotics applications. Unfortunately, the existing LiDAR fiducial tag localization methods do not apply to 3D LiDAR maps, even though solving this problem would benefit LiDAR-based relocalization and navigation. In this paper, we develop a novel approach to directly localize fiducial tags on a 3D LiDAR prior map, returning the tag poses (labeled by ID number) and vertex locations (labeled by index) w.r.t. the global coordinate system of the map. In particular, considering that fiducial tags are thin sheet objects indistinguishable from the planes they are attached to, we design a new pipeline that gradually analyzes the 3D point cloud of the map from the intensity and geometry perspectives, extracting potential tag-containing point clusters. Then, we introduce an intermediate-plane-based method to further check whether each potential cluster contains a tag and, if so, compute the vertex locations and tag pose. We conduct both qualitative and quantitative experiments to demonstrate that our approach is the first method capable of localizing tags on a 3D LiDAR map, while achieving better accuracy than previous methods. The open-source implementation of this work is available at: https://github.com/York-SDCNLab/Marker-Detection-General.
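The first stage of the pipeline described in the abstract (intensity filtering, geometric analysis, extraction of potential tag-containing clusters) can be illustrated with a rough sketch. The helper below is a minimal approximation using numpy and scikit-learn, not the authors' implementation; the intensity threshold, DBSCAN parameters, and planarity tolerance are assumptions, and the paper's intermediate-plane verification and tag decoding steps are omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_tag_candidates(points, intensities, intensity_thresh=0.8,
                        eps=0.05, min_samples=30, planarity_tol=0.01):
    """Extract roughly planar, high-intensity clusters from a LiDAR map as
    potential tag-containing regions. Illustrative only; all thresholds are
    assumptions, not values from the paper."""
    bright = points[intensities > intensity_thresh]       # intensity perspective
    if len(bright) == 0:
        return []

    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(bright)
    candidates = []
    for lbl in set(labels.tolist()) - {-1}:               # -1 marks DBSCAN noise
        cluster = bright[labels == lbl]
        center = cluster.mean(axis=0)
        # Geometry perspective: fit a plane by SVD; keep near-planar clusters.
        _, s, vt = np.linalg.svd(cluster - center, full_matrices=False)
        normal = vt[-1]                                    # least-variance direction
        if s[-1] / np.sqrt(len(cluster)) < planarity_tol:  # RMS out-of-plane deviation
            candidates.append((cluster, center, normal))
    return candidates
```

The paper's intermediate-plane check, tag verification, and ID decoding would then operate on each candidate; those steps are not reproduced here.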
Related papers
- Map-aided annotation for pole base detection [0.0]
In this paper, a 2D HD map is used to automatically annotate pole-like features in images.
In the absence of height information, the map features are represented as pole bases at the ground level.
We show how an object detector can be trained to detect a pole base.
arXiv Detail & Related papers (2024-03-04T09:23:11Z)
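The core operation behind the map-aided annotation above is projecting ground-level pole bases from map coordinates into the image. The pinhole-projection sketch below only illustrates that idea; the z = 0 ground assumption and the calibration inputs are assumptions, not details from the paper.

```python
import numpy as np

def project_pole_bases(pole_xy, K, T_cam_from_map):
    """Project 2D map pole bases (assumed to lie on the z = 0 ground plane)
    into an image. K: 3x3 intrinsics; T_cam_from_map: 4x4 map-to-camera pose.
    Returns (m, 2) pixel coordinates of the points in front of the camera."""
    n = len(pole_xy)
    pts_map = np.hstack([pole_xy, np.zeros((n, 1)), np.ones((n, 1))])  # (n, 4)
    pts_cam = (T_cam_from_map @ pts_map.T)[:3]                         # (3, n)
    in_front = pts_cam[2] > 0.1                    # drop points behind the camera
    uv = K @ pts_cam[:, in_front]
    return (uv[:2] / uv[2]).T                      # perspective divide -> pixels
```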
- RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps [8.625083692154414]
We propose RaLF, a novel deep neural network-based approach for localizing radar scans in a LiDAR map of the environment.
RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.
We extensively evaluate our approach on multiple real-world driving datasets and show that RaLF achieves state-of-the-art performance for both place recognition and metric localization.
arXiv Detail & Related papers (2023-09-18T15:37:01Z)
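RaLF's structure as summarized above (modality-specific encoders, a place-recognition head that outputs a global descriptor, and a metric head that regresses a 3-DoF transform) maps onto a small two-head network. The PyTorch sketch below only mirrors that structure under assumed bird's-eye-view inputs and layer sizes; it is not RaLF's architecture.

```python
import torch
import torch.nn as nn

class TwoHeadLocalizer(nn.Module):
    """Toy stand-in for a RaLF-style model: one encoder per modality, a global
    descriptor head for place recognition, and a 3-DoF (x, y, yaw) pose head."""
    def __init__(self, desc_dim=256):
        super().__init__()
        def make_encoder():
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.radar_enc = make_encoder()
        self.lidar_enc = make_encoder()
        self.desc_head = nn.Linear(64, desc_dim)             # place recognition
        self.pose_head = nn.Sequential(                      # metric localization
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, radar_bev, lidar_bev):
        r = self.radar_enc(radar_bev)                        # (B, 64)
        l = self.lidar_enc(lidar_bev)                        # (B, 64)
        descriptor = nn.functional.normalize(self.desc_head(r), dim=-1)
        pose = self.pose_head(torch.cat([r, l], dim=-1))     # (B, 3): x, y, yaw
        return descriptor, pose
```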
- V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection [73.37781484123536]
We introduce a highly performant 3D object detector for point clouds using the DETR framework.
To address a limitation of the plain DETR framework on point clouds, we introduce a novel 3D Vertex Relative Position Encoding (3DV-RPE) method.
We show exceptional results on the challenging ScanNetV2 benchmark.
arXiv Detail & Related papers (2023-08-08T17:14:14Z)
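The vertex-relative encoding named in V-DETR can be pictured by expressing each point relative to the eight corners of a candidate box. The axis-aligned box and raw offset features below are a simplification for intuition, not the paper's 3DV-RPE formulation.

```python
import numpy as np

def vertex_relative_offsets(points, box_center, box_size):
    """Offsets from each point to the 8 vertices of an axis-aligned box.
    points: (N, 3); box_center, box_size: (3,). Returns (N, 8, 3).
    A toy version of encoding positions relative to box vertices."""
    half = np.asarray(box_size, dtype=float) / 2.0
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)])        # (8, 3) corner signs
    vertices = np.asarray(box_center) + signs * half          # (8, 3)
    return points[:, None, :] - vertices[None, :, :]          # (N, 8, 3)
```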
- Weakly Supervised Monocular 3D Object Detection using Multi-View Projection and Direction Consistency [78.76508318592552]
Monocular 3D object detection has become a mainstream approach in automatic driving for its easy application.
Most current methods still rely on 3D point cloud data for labeling the ground truths used in the training phase.
We propose a new weakly supervised monocular 3D object detection method, which can train the model with only 2D labels marked on images.
arXiv Detail & Related papers (2023-03-15T15:14:00Z)
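A concrete way to picture the projection-based supervision mentioned above is a reprojection-consistency term: project a predicted 3D object center into several calibrated views and compare with the 2D annotations. The sketch below is a generic, assumed form of such a term, not the paper's loss.

```python
import numpy as np

def reprojection_consistency(center_3d, proj_mats, centers_2d):
    """Average pixel error between a predicted 3D object center projected into
    several views and the annotated 2D box centers. proj_mats: list of 3x4
    camera projection matrices; centers_2d: list of (u, v) annotations."""
    errors = []
    for P, uv_gt in zip(proj_mats, centers_2d):
        p = P @ np.append(center_3d, 1.0)       # homogeneous projection
        uv = p[:2] / p[2]
        errors.append(np.linalg.norm(uv - np.asarray(uv_gt)))
    return float(np.mean(errors))
```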
- GraffMatch: Global Matching of 3D Lines and Planes for Wide Baseline LiDAR Registration [41.00550745153015]
Using geometric landmarks like lines and planes can increase navigation accuracy and decrease map storage requirements.
However, landmark-based registration for applications like loop closure detection is challenging because a reliable initial guess is not available.
We adopt the affine Grassmannian manifold to represent 3D lines and planes and prove that the distance between two landmarks is invariant to rotation and translation.
arXiv Detail & Related papers (2022-12-24T15:02:15Z)
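For intuition about the invariant landmark comparison in GraffMatch, two planes can be compared through the principal angles between the subspaces they span. The sketch below uses the ordinary (linear) Grassmannian via SciPy and ignores plane offsets, whereas the paper works on the affine Grassmannian; it is an approximation, not the paper's metric.

```python
import numpy as np
from scipy.linalg import subspace_angles

def plane_basis(normal):
    """Return a 3x2 orthonormal basis spanning the plane with the given normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = seed - np.dot(seed, n) * n             # Gram-Schmidt against the normal
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return np.column_stack([u, v])             # (3, 2)

def plane_distance(normal_a, normal_b):
    """Norm of the principal angles between two planes: unchanged when the same
    rotation is applied to both, and independent of translation because only
    the normals are used (unlike the affine Grassmannian in GraffMatch)."""
    angles = subspace_angles(plane_basis(normal_a), plane_basis(normal_b))
    return float(np.linalg.norm(angles))
```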
- Label-Guided Auxiliary Training Improves 3D Object Detector [32.96310946612949]
We propose a Label-Guided auxiliary training method for 3D object detection (LG3D).
Our proposed LG3D improves VoteNet by 2.5% and 3.1% mAP on the SUN RGB-D and ScanNetV2 datasets.
arXiv Detail & Related papers (2022-07-24T14:22:21Z)
- GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet can be conveniently integrated into existing deep 3D detectors as a plug-and-play module.
arXiv Detail & Related papers (2022-07-06T06:26:17Z)
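The one-to-many idea in GLENet can be sketched as a tiny conditional VAE that takes an object feature, samples a latent code, and decodes a 7-D box; the dimensions, architecture, and box parameterization below are assumptions for illustration, not GLENet's design.

```python
import torch
import torch.nn as nn

class TinyBoxCVAE(nn.Module):
    """Toy conditional VAE mapping one object feature to many plausible boxes
    (x, y, z, w, l, h, yaw); the spread of samples reads as label uncertainty."""
    def __init__(self, feat_dim=128, latent_dim=8, box_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim + box_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 64), nn.ReLU(), nn.Linear(64, box_dim))

    def forward(self, feat, box):
        h = self.encoder(torch.cat([feat, box], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(torch.cat([feat, z], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample_boxes(self, feat, n=10):
        """Draw n plausible boxes for a single (1, feat_dim) object feature."""
        z = torch.randn(n, self.to_mu.out_features)
        return self.decoder(torch.cat([feat.expand(n, -1), z], dim=-1))
```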
- Progressive Coordinate Transforms for Monocular 3D Object Detection [52.00071336733109]
In this paper, we propose a novel and lightweight approach, dubbed Progressive Coordinate Transforms (PCT), to facilitate learning coordinate representations.
arXiv Detail & Related papers (2021-08-12T15:22:33Z)
- Unsupervised Object Detection with LiDAR Clues [70.73881791310495]
We present the first practical method for unsupervised object detection with the aid of LiDAR clues.
In our approach, candidate object segments are first generated based on 3D point clouds.
Then, an iterative segment labeling process is conducted to assign segment labels and to train a segment labeling network.
The labeling process is carefully designed so as to mitigate the issue of long-tailed and open-ended distribution.
arXiv Detail & Related papers (2020-11-25T18:59:54Z)
- Complete & Label: A Domain Adaptation Approach to Semantic Segmentation of LiDAR Point Clouds [49.47017280475232]
We study an unsupervised domain adaptation problem for the semantic labeling of 3D point clouds.
We take a Complete and Label approach to recover the underlying surfaces before passing them to a segmentation network.
The recovered 3D surfaces serve as a canonical domain, from which semantic labels can transfer across different LiDAR sensors.
arXiv Detail & Related papers (2020-07-16T17:42:05Z)