Rendering the Directional TSDF for Tracking and Multi-Sensor
Registration with Point-To-Plane Scale ICP
- URL: http://arxiv.org/abs/2301.12796v1
- Date: Mon, 30 Jan 2023 11:46:03 GMT
- Title: Rendering the Directional TSDF for Tracking and Multi-Sensor
Registration with Point-To-Plane Scale ICP
- Authors: Malte Splietker and Sven Behnke
- Abstract summary: The Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF.
We present methods for rendering depth- and color images from the DTSDF.
We observe that our method improves tracking performance and increases re-usability of mapped scenes.
- Score: 29.998917158604694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense real-time tracking and mapping from RGB-D images is an important tool
for many robotic applications, such as navigation and manipulation. The
recently presented Directional Truncated Signed Distance Function (DTSDF) is an
augmentation of the regular TSDF that shows potential for more coherent maps
and improved tracking performance. In this work, we present methods for
rendering depth- and color images from the DTSDF, making it a true drop-in
replacement for the regular TSDF in established trackers. We evaluate the
algorithm on well-established datasets and observe that our method improves
tracking performance and increases re-usability of mapped scenes. Furthermore,
we add color integration which notably improves color-correctness at adjacent
surfaces. Our novel formulation of combined ICP with frame-to-keyframe
photometric error minimization further improves tracking results. Lastly, we
introduce Sim3 point-to-plane ICP for refining pose priors in a multi-sensor
scenario with different scale factors.
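The Sim3 point-to-plane ICP mentioned above jointly estimates rotation, translation, and a scale factor between two sensors. The following is an illustrative sketch, not the authors' implementation: it minimizes the point-to-plane error n_i . (s R p_i + t - q_i) over Sim(3) with Gauss-Newton, assuming correspondences and normals are already given (function names and the 7-parameter (rotation, translation, log-scale) update are choices made here for illustration).

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ x == cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def sim3_point_to_plane_icp(src, dst, normals, iters=30):
    """Refine (s, R, t) minimizing sum_i (n_i . (s R p_i + t - q_i))^2.

    src[i] corresponds to dst[i] with surface normal normals[i];
    data association (the usual ICP inner loop) is assumed done.
    """
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        x = (s * (R @ src.T)).T + t                      # transformed source
        r = np.einsum('ij,ij->i', normals, x - dst)      # signed residuals
        # Jacobian columns: rotation (3), translation (3), log-scale (1)
        J = np.hstack([
            np.cross(x - t, normals),                    # d r / d omega
            normals,                                     # d r / d t
            np.einsum('ij,ij->i', normals, x - t)[:, None],  # d r / d sigma
        ])
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]    # Gauss-Newton step
        R = exp_so3(delta[:3]) @ R
        t = t + delta[3:6]
        s = s * np.exp(delta[6])                         # multiplicative scale
    return s, R, t
```

Parametrizing scale as a log-scale increment keeps s positive and makes the update multiplicative, which is the standard choice for Sim(3); with scale fixed at 1 this reduces to ordinary SE(3) point-to-plane ICP.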
Related papers
- TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes [58.180556221044235]
We present a new approach to bridge the domain gap between synthetic and real-world data for unmanned aerial vehicle (UAV)-based perception.
Our formulation is designed for dynamic scenes, consisting of moving objects or human actions.
We evaluate its performance on challenging datasets, including Okutama Action and UG2.
arXiv Detail & Related papers (2024-05-04T21:55:33Z) - $ν$-DBA: Neural Implicit Dense Bundle Adjustment Enables Image-Only Driving Scene Reconstruction [31.64067619807023]
$\nu$-DBA implements geometric dense bundle adjustment (DBA) using 3D neural implicit surfaces for map parametrization.
We fine-tune the optical flow model with per-scene self-supervision to further improve the quality of the dense mapping.
arXiv Detail & Related papers (2024-04-29T05:29:26Z) - MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z) - CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - Differentiable Registration of Images and LiDAR Point Clouds with
VoxelPoint-to-Pixel Matching [58.10418136917358]
Cross-modality registration between 2D images from cameras and 3D point clouds from LiDARs is a crucial task in computer vision and robotics.
Previous methods estimate 2D-3D correspondences by matching point and pixel patterns learned by neural networks.
We learn a structured cross-modality matching solver to represent 3D features via a different latent pixel space.
arXiv Detail & Related papers (2023-12-07T05:46:10Z) - DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and
Photometric Bundle Adjustment [41.27664827586102]
Direct methods have shown excellent performance in the applications of visual odometry and SLAM.
We propose a framework that effectively combines direct image alignment for the short-term tracking and sliding-window photometric bundle adjustment for 3D object detection.
arXiv Detail & Related papers (2022-09-29T17:40:22Z) - iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z) - Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles
with Adaptive Truncated Signed Distance Function [9.414880946870916]
We propose a novel 3D reconstruction and semantic mapping system using LiDAR and camera sensors.
An Adaptive Truncated Signed Distance Function is introduced to describe surfaces implicitly, which can deal with different LiDAR point sparsities.
An optimal image patch selection strategy is proposed to estimate the optimal semantic class for each triangle mesh.
arXiv Detail & Related papers (2022-02-28T15:11:25Z) - TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view
Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z) - Rendering and Tracking the Directional TSDF: Modeling Surface
Orientation for Coherent Maps [28.502280038100167]
The Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF.
We present methods for rendering depth- and color maps from the DTSDF, making it a true drop-in replacement for the regular TSDF in established trackers.
arXiv Detail & Related papers (2021-08-18T12:37:15Z)
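The DTSDF entry above rests on the standard TSDF fusion scheme: each voxel stores a truncated signed distance and a weight, updated as a running average over observations, and the directional variant keeps one such pair per dominant surface direction. The toy sketch below illustrates that idea under assumptions made here (a fixed truncation distance, unit observation weights, and six direction volumes indexed by the dominant normal axis); it is not the authors' implementation.

```python
import numpy as np

TRUNC = 0.1  # truncation distance in meters (assumed value)

def direction_index(normal):
    """DTSDF: select the volume whose axis best matches the surface normal.

    Six volumes cover +x, -x, +y, -y, +z, -z; a plain TSDF has one.
    """
    ax = int(np.argmax(np.abs(normal)))
    return 2 * ax + (0 if normal[ax] >= 0.0 else 1)

def integrate(tsdf, weight, voxel_center, cam, surf_point, normal):
    """Fuse one observed surface point into the directional TSDF arrays.

    tsdf, weight: length-6 arrays, one (value, weight) pair per direction.
    The signed distance is approximated along the camera ray as the
    difference of the surface depth and the voxel depth.
    """
    sdf = np.linalg.norm(surf_point - cam) - np.linalg.norm(voxel_center - cam)
    if sdf < -TRUNC:
        return  # voxel is behind the surface beyond truncation: no update
    phi = min(1.0, sdf / TRUNC)          # truncate and normalize to [-1, 1]
    k = direction_index(normal)
    # Weighted running average with unit observation weight
    tsdf[k] = (tsdf[k] * weight[k] + phi) / (weight[k] + 1.0)
    weight[k] += 1.0
```

Rendering from such a map (the contribution of the paper above) then requires combining the per-direction volumes during ray casting instead of reading a single value per voxel.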
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.