Optimal Target Shape for LiDAR Pose Estimation
- URL: http://arxiv.org/abs/2109.01181v2
- Date: Mon, 6 Sep 2021 15:16:39 GMT
- Title: Optimal Target Shape for LiDAR Pose Estimation
- Authors: Jiunn-Kai Huang, William Clark, and Jessy W. Grizzle
- Abstract summary: Targets are essential in problems such as object tracking in cluttered or textureless environments.
Symmetric shapes lead to pose ambiguity when using sparse sensor data.
This paper introduces the concept of optimizing target shape to remove pose ambiguity for LiDAR point clouds.
- Score: 1.9048510647598205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Targets are essential in problems such as object tracking in cluttered or
textureless environments, camera (and multi-sensor) calibration tasks, and
simultaneous localization and mapping (SLAM). Target shapes for these tasks
typically are symmetric (square, rectangular, or circular) and work well for
structured, dense sensor data such as pixel arrays (i.e., image). However,
symmetric shapes lead to pose ambiguity when using sparse sensor data such as
LiDAR point clouds and suffer from the quantization uncertainty of the LiDAR.
This paper introduces the concept of optimizing target shape to remove pose
ambiguity for LiDAR point clouds. A target is designed to induce large
gradients at edge points under rotation and translation relative to the LiDAR
to ameliorate the quantization uncertainty associated with point cloud
sparseness. Moreover, given a target shape, we present a means that leverages
the target's geometry to estimate the target's vertices while globally
estimating the pose. Both the simulation and the experimental results (verified
by a motion capture system) confirm that by using the optimal shape and the
global solver, we achieve centimeter error in translation and a few degrees in
rotation even when a partially illuminated target is placed 30 meters away. All
the implementations and datasets are available at
https://github.com/UMich-BipedLab/optimal_shape_global_pose_estimation.
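The abstract describes fitting a known target geometry to sparse LiDAR edge returns while solving for the pose globally. As a rough illustration only (not the authors' optimal shape or global solver, which are available in the linked repository), the Python sketch below fits a hypothetical square target to simulated noisy boundary points by minimizing a point-to-boundary cost over SE(3); the template polygon, the cost function, and the multi-start Nelder-Mead optimizer are all assumptions made for this example.

```python
# Illustrative sketch, not the paper's implementation: estimate the pose of a
# known planar target from sparse, noisy boundary points.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment ab (all 3-vectors)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))


def boundary_cost(x, template, edge_points):
    """Sum of squared distances from edge points to the posed target boundary.

    x = [rx, ry, rz, tx, ty, tz]: axis-angle rotation followed by translation.
    """
    R = Rotation.from_rotvec(x[:3])
    verts = R.apply(template) + x[3:]
    n = len(verts)
    cost = 0.0
    for p in edge_points:
        d = min(point_to_segment(p, verts[i], verts[(i + 1) % n]) for i in range(n))
        cost += d ** 2
    return cost


# Hypothetical 0.8 m square target in its own frame (z = 0); the optimized
# shape from the paper would replace this polygon.
template = np.array([[-0.4, -0.4, 0.0], [0.4, -0.4, 0.0],
                     [0.4, 0.4, 0.0], [-0.4, 0.4, 0.0]])

# Simulate sparse, noisy boundary returns from a "true" pose for demonstration.
rng = np.random.default_rng(0)
true_R = Rotation.from_euler("zyx", [20.0, 5.0, 10.0], degrees=True)
true_t = np.array([10.0, 1.0, 0.5])
samples = [template[i] + s * (template[(i + 1) % 4] - template[i])
           for i in range(4) for s in np.linspace(0.0, 1.0, 8)]
edge_pts = true_R.apply(np.array(samples)) + true_t
edge_pts += rng.normal(0.0, 0.01, edge_pts.shape)

# Multi-start local optimization as a crude stand-in for a global solver.
best = None
for _ in range(10):
    x0 = np.concatenate([rng.normal(0.0, 0.5, 3), edge_pts.mean(axis=0)])
    res = minimize(boundary_cost, x0, args=(template, edge_pts),
                   method="Nelder-Mead", options={"maxiter": 2000})
    if best is None or res.fun < best.fun:
        best = res

print("estimated translation:", np.round(best.x[3:], 3))
```

The random restarts here merely stand in for a proper global stage; the paper's contribution is precisely the target shape and solver that make such an estimate unambiguous and accurate at long range.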
Related papers
- ESCAPE: Equivariant Shape Completion via Anchor Point Encoding [79.59829525431238]
We introduce ESCAPE, a framework designed to achieve rotation-equivariant shape completion.
ESCAPE employs a distinctive encoding strategy by selecting anchor points from a shape and representing all points as a distance to all anchor points.
ESCAPE achieves robust, high-quality reconstructions across arbitrary rotations and translations.
arXiv Detail & Related papers (2024-12-01T20:05:14Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation composed of dense and complete point clouds depicting the target shape precisely by shape completion for robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Real-Time Simultaneous Localization and Mapping with LiDAR intensity [9.374695605941627]
We propose a novel real-time LiDAR intensity image-based simultaneous localization and mapping method.
Our method can run in real time with high accuracy and works well with illumination changes, low-texture, and unstructured environments.
arXiv Detail & Related papers (2023-01-23T03:59:48Z)
- Generative Category-Level Shape and Pose Estimation with Semantic Primitives [27.692997522812615]
We propose a novel framework for category-level object shape and pose estimation from a single RGB-D image.
To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space.
We show that the proposed method achieves SOTA pose estimation performance and better generalization in the real-world dataset.
arXiv Detail & Related papers (2022-10-03T17:51:54Z)
- RBP-Pose: Residual Bounding Box Projection for Category-Level Pose Estimation [103.74918834553247]
Category-level object pose estimation aims to predict the 6D pose as well as the 3D metric size of arbitrary objects from a known set of categories.
Recent methods harness shape prior adaptation to map the observed point cloud into the canonical space and apply the Umeyama algorithm (sketched after this list) to recover the pose and size.
We propose a novel geometry-guided Residual Object Bounding Box Projection network RBP-Pose that jointly predicts object pose and residual vectors.
arXiv Detail & Related papers (2022-07-30T14:45:20Z)
- Efficient 3D Deep LiDAR Odometry [16.388259779644553]
An efficient 3D point cloud learning architecture, named PWCLO-Net, is first proposed in this paper.
The entire architecture is holistically optimized end-to-end to achieve adaptive learning of cost volume and mask.
arXiv Detail & Related papers (2021-11-03T11:09:49Z)
- Category-Level Metric Scale Object Shape and Pose Estimation [73.92460712829188]
We propose a framework that jointly estimates a metric scale shape and pose from a single RGB image.
We validated our method on both synthetic and real-world datasets to evaluate category-level object pose and shape.
arXiv Detail & Related papers (2021-09-01T12:16:46Z)
- Progressive Coordinate Transforms for Monocular 3D Object Detection [52.00071336733109]
We propose a novel and lightweight approach, dubbed Progressive Coordinate Transforms (PCT), to facilitate learning coordinate representations.
arXiv Detail & Related papers (2021-08-12T15:22:33Z)
- Robust 6D Object Pose Estimation by Learning RGB-D Features [59.580366107770764]
We propose a novel discrete-continuous formulation for rotation regression to resolve this local-optimum problem.
We uniformly sample rotation anchors in SO(3), and predict a constrained deviation from each anchor to the target, as well as uncertainty scores for selecting the best prediction.
Experiments on two benchmarks: LINEMOD and YCB-Video, show that the proposed method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2020-02-29T06:24:55Z)
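The RBP-Pose summary above mentions the Umeyama algorithm for recovering pose and size from corresponded point sets; a self-contained sketch of that standard algorithm (not of RBP-Pose itself) is given below. The synthetic correspondences are invented purely for the demonstration.

```python
# Sketch of the standard Umeyama similarity alignment: given corresponding
# source and target points, recover scale s, rotation R, and translation t
# minimizing sum ||target_i - (s * R @ source_i + t)||^2.
import numpy as np


def umeyama(source, target):
    """Return (s, R, t) aligning source to target in the least-squares sense."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src, tgt = source - mu_s, target - mu_t
    cov = tgt.T @ src / len(source)           # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # fix an improper (reflected) solution
    R = U @ S @ Vt
    var_src = (src ** 2).sum() / len(source)
    s = np.trace(np.diag(D) @ S) / var_src    # optimal uniform scale
    t = mu_t - s * R @ mu_s
    return s, R, t


# Quick self-check with a random similarity transform (synthetic data).
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                        # ensure a proper rotation
tgt = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = umeyama(src, tgt)
print("recovered scale:", round(s, 3))        # expected: 2.0
```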
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.