Rendering and Tracking the Directional TSDF: Modeling Surface
Orientation for Coherent Maps
- URL: http://arxiv.org/abs/2108.08115v1
- Date: Wed, 18 Aug 2021 12:37:15 GMT
- Title: Rendering and Tracking the Directional TSDF: Modeling Surface
Orientation for Coherent Maps
- Authors: Malte Splietker and Sven Behnke
- Abstract summary: The Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF.
We present methods for rendering depth- and color maps from the DTSDF, making it a true drop-in replacement for the regular TSDF in established trackers.
- Score: 28.502280038100167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense real-time tracking and mapping from RGB-D images is an important tool
for many robotic applications, such as navigation or grasping. The recently
presented Directional Truncated Signed Distance Function (DTSDF) is an
augmentation of the regular TSDF and shows potential for more coherent maps and
improved tracking performance. In this work, we present methods for rendering
depth- and color maps from the DTSDF, making it a true drop-in replacement for
the regular TSDF in established trackers. We evaluate and show that our method
increases re-usability of mapped scenes. Furthermore, we add color integration
which notably improves color-correctness at adjacent surfaces.
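The core idea behind the DTSDF can be illustrated with a minimal sketch: instead of one truncated signed distance per voxel, each voxel keeps a separate distance and weight per dominant surface orientation, so opposing surfaces falling into the same voxel no longer overwrite each other. The six-direction split, truncation value, and all names below are illustrative assumptions, not the authors' implementation.

```python
TRUNCATION = 0.1  # truncation band in metres (assumed)

# Six axis-aligned direction bins: +X, -X, +Y, -Y, +Z, -Z
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]


def direction_index(normal):
    """Pick the direction bin whose axis best matches the surface normal."""
    best, best_dot = 0, float("-inf")
    for i, d in enumerate(DIRECTIONS):
        dot = sum(n * c for n, c in zip(normal, d))
        if dot > best_dot:
            best, best_dot = i, dot
    return best


class DirectionalVoxel:
    def __init__(self):
        # One (tsdf, weight) pair per direction instead of a single pair.
        self.tsdf = [0.0] * 6
        self.weight = [0.0] * 6

    def integrate(self, signed_dist, normal, obs_weight=1.0):
        """Weighted running-average update, applied only to the bin
        selected by the observed surface normal."""
        d = max(-TRUNCATION, min(TRUNCATION, signed_dist))
        i = direction_index(normal)
        w = self.weight[i]
        self.tsdf[i] = (self.tsdf[i] * w + d * obs_weight) / (w + obs_weight)
        self.weight[i] += obs_weight
```

Integrating two opposing observations, e.g. `v.integrate(0.05, (0, 0, 1))` and `v.integrate(-0.05, (0, 0, -1))`, updates different bins, which is precisely the coherence benefit the abstract describes for thin or adjacent surfaces.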
Related papers
- TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes [58.180556221044235]
We present a new approach to bridge the domain gap between synthetic and real-world data for unmanned aerial vehicle (UAV)-based perception.
Our formulation is designed for dynamic scenes, consisting of moving objects or human actions.
We evaluate its performance on challenging datasets, including Okutama Action and UG2.
arXiv Detail & Related papers (2024-05-04T21:55:33Z)
- Weakly-Supervised 3D Reconstruction of Clothed Humans via Normal Maps [1.6462601662291156]
We present a novel deep learning-based approach to the 3D reconstruction of clothed humans using weak supervision via 2D normal maps.
Given a single RGB image or multiview images, our network infers a signed distance function (SDF) discretized on a tetrahedral mesh surrounding the body in a rest pose.
We demonstrate the efficacy of our approach for both network inference and 3D reconstruction.
arXiv Detail & Related papers (2023-11-27T18:06:35Z)
- Rendering the Directional TSDF for Tracking and Multi-Sensor Registration with Point-To-Plane Scale ICP [29.998917158604694]
The Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF.
We present methods for rendering depth- and color images from the DTSDF.
We observe that our method improves tracking performance and increases re-usability of mapped scenes.
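The point-to-plane scale ICP named in the title minimizes the projection of the residual between corresponding points onto the target surface normal, with an additional scale factor. A sketch of that residual follows; the exact formulation in the paper may differ, and `s`, `R`, `t`, and the helper names are assumptions.

```python
def rotate(R, p):
    """Apply a 3x3 rotation matrix (given as a list of rows) to a point."""
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]


def point_to_plane_residual(s, R, t, src, dst, normal):
    """n . (s * R * p + t - q): signed distance of the transformed source
    point from the tangent plane of its target correspondence."""
    p = rotate(R, src)
    diff = [s * p[i] + t[i] - dst[i] for i in range(3)]
    return sum(n * d for n, d in zip(normal, diff))


def total_error(s, R, t, pairs):
    """Sum of squared residuals over (src, dst, normal) correspondences;
    ICP alternates between matching pairs and minimizing this quantity."""
    return sum(point_to_plane_residual(s, R, t, p, q, n) ** 2
               for p, q, n in pairs)
```

Jointly estimating the scale `s` alongside rotation and translation is what allows registering sensors whose depth scales disagree.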
arXiv Detail & Related papers (2023-01-30T11:46:03Z)
- PlaneSDF-based Change Detection for Long-term Dense Mapping [10.159737713094119]
We look into the problem of change detection based on a novel map representation, dubbed Plane Signed Distance Fields (PlaneSDF)
Given point clouds of the source and target scenes, we propose a three-step PlaneSDF-based change detection approach.
We evaluate our approach on both synthetic and real-world datasets and demonstrate its effectiveness via the task of changed object detection.
arXiv Detail & Related papers (2022-07-18T00:19:45Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Runway Extraction and Improved Mapping from Space Imagery [0.0]
We identify two generative adversarial networks (GANs) that translate reversibly between plausible runway maps and satellite imagery.
We experimentally show that the traditional grey-tan map palette is not a required training input but can be augmented by higher contrast mapping palettes.
We identify examples of faulty runway maps where the published satellite and mapped runways disagree but an automated update renders the correct map using GANs.
arXiv Detail & Related papers (2021-12-30T03:15:45Z)
- Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction [53.315347543761426]
Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing at every voxel both the signed distance field as well as its gradient vector field, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation.
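The storage scheme described above can be sketched in a few lines: a hash map keyed by integer voxel coordinates holds, per voxel, both the signed distance and its gradient, which permits a first-order lookup anywhere inside the voxel. Names and the voxel size are illustrative assumptions, not the paper's implementation.

```python
VOXEL_SIZE = 0.05  # metres (assumed)

# Hash map from integer voxel index -> (signed distance, gradient vector).
grid = {}


def voxel_index(point):
    """Quantize a 3D point to its voxel's integer index."""
    return tuple(int(c // VOXEL_SIZE) for c in point)


def store(point, sdf, gradient):
    grid[voxel_index(point)] = (sdf, gradient)


def query(point):
    """First-order SDF lookup: the value at the voxel centre plus a
    gradient correction, the step that makes the representation
    semi-implicit rather than piecewise constant."""
    idx = voxel_index(point)
    if idx not in grid:
        return None
    sdf, grad = grid[idx]
    centre = [(i + 0.5) * VOXEL_SIZE for i in idx]
    offset = [p - c for p, c in zip(point, centre)]
    return sdf + sum(g * o for g, o in zip(grad, offset))
```

Because the gradient is stored rather than re-estimated by finite differences, queries like this stay cheap enough for the direct SDF tracking and voxel-space bundle adjustment the summary mentions.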
arXiv Detail & Related papers (2021-11-26T18:33:14Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
- MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking [72.65494220685525]
We propose a new dynamic modality-aware filter generation module (named MFGNet) to boost the message communication between visible and thermal data.
We generate dynamic modality-aware filters with two independent networks. The visible and thermal filters will be used to conduct a dynamic convolutional operation on their corresponding input feature maps respectively.
To address issues caused by heavy occlusion, fast motion, and out-of-view, we propose to conduct a joint local and global search by exploiting a new direction-aware target-driven attention mechanism.
arXiv Detail & Related papers (2021-07-22T03:10:51Z)
- Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking [85.333260415532]
We develop a novel late fusion method to infer the fusion weight maps of both RGB and thermal (T) modalities.
When the appearance cue is unreliable, we take motion cues into account to make the tracker robust.
Numerous results on three recent RGB-T tracking datasets show that the proposed tracker performs significantly better than other state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-04T08:11:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.