3D Line Mapping Revisited
- URL: http://arxiv.org/abs/2303.17504v1
- Date: Thu, 30 Mar 2023 16:14:48 GMT
- Title: 3D Line Mapping Revisited
- Authors: Shaohui Liu, Yifan Yu, Rémi Pautrat, Marc Pollefeys, Viktor Larsson
- Abstract summary: LIMAP is a library for 3D line mapping that robustly and efficiently creates 3D line maps from multi-view imagery.
Our code integrates seamlessly with existing point-based Structure-from-Motion methods.
Our robust 3D line maps also open up new research directions.
- Score: 86.13455066577657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast to sparse keypoints, a handful of line segments can concisely
encode the high-level scene layout, as they often delineate the main structural
elements. In addition to offering strong geometric cues, they are also
omnipresent in urban landscapes and indoor scenes. Despite their apparent
advantages, current line-based reconstruction methods are far behind their
point-based counterparts. In this paper we aim to close the gap by introducing
LIMAP, a library for 3D line mapping that robustly and efficiently creates 3D
line maps from multi-view imagery. This is achieved through revisiting the
degeneracy problem of line triangulation, carefully crafted scoring and track
building, and exploiting structural priors such as line coincidence,
parallelism, and orthogonality. Our code integrates seamlessly with existing
point-based Structure-from-Motion methods and can leverage their 3D points to
further improve the line reconstruction. Furthermore, as a byproduct, the
method is able to recover 3D association graphs between lines and points /
vanishing points (VPs). In thorough experiments, we show that LIMAP
significantly outperforms existing approaches for 3D line mapping. Our robust
3D line maps also open up new research directions. We show two example
applications: visual localization and bundle adjustment, where integrating
lines alongside points yields the best results. Code is available at
https://github.com/cvg/limap.
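The degeneracy problem of line triangulation mentioned in the abstract can be illustrated with a minimal numpy sketch (this is not LIMAP's actual implementation; all function names are illustrative). A 2D line observed by a camera back-projects to a plane through the camera center, and two such planes from different views intersect in the 3D line — unless the planes (nearly) coincide, which happens when the 3D line lies in the epipolar plane of the two cameras:

```python
import numpy as np

def backproject_line(l2d, P):
    # A 2D line l (3-vector, l^T x = 0 in homogeneous image coords)
    # seen by a camera with 3x4 projection matrix P back-projects to
    # the plane pi = P^T l (4-vector, pi^T X = 0 in homogeneous 3D).
    return P.T @ l2d

def triangulate_line(l1, P1, l2, P2):
    # Intersect the two back-projected planes; return the 3D line as
    # (point on line, unit direction), or None when degenerate.
    pi1 = backproject_line(l1, P1)
    pi2 = backproject_line(l2, P2)
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        # Degenerate: the two planes (nearly) coincide, i.e. the 3D
        # line lies in the epipolar plane and this view pair gives no
        # constraint on its position along the plane.
        return None
    # A particular point on both planes: solve the 3x3 system
    # [n1; n2; direction] X = [-d1, -d2, 0].
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / norm
```

This two-view intersection is the textbook construction; a robust mapper additionally needs the scoring, track building, and structural priors the paper describes to handle noise and near-degenerate configurations.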
Related papers
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- Fully Geometric Panoramic Localization [16.200889977514862]
We introduce a lightweight and accurate localization method that only utilizes the geometry of 2D-3D lines.
Given a pre-captured 3D map, our approach localizes a panorama image, taking advantage of the holistic 360 view.
Our fully geometric approach does not involve extensive parameter tuning or neural network training, making it a practical algorithm that can be readily deployed in the real world.
arXiv Detail & Related papers (2024-03-29T01:07:20Z)
- Representing 3D sparse map points and lines for camera relocalization [1.2974519529978974]
We show how a lightweight neural network can learn to represent both 3D point and line features.
In tests, our method achieves a substantial improvement over state-of-the-art learning-based methods.
arXiv Detail & Related papers (2024-02-28T03:07:05Z)
- LDL: Line Distance Functions for Panoramic Localization [22.46846444866008]
We introduce LDL, an algorithm that localizes a panorama to a 3D map using line segments.
Our method effectively observes the holistic distribution of lines within panoramic images and 3D maps.
arXiv Detail & Related papers (2023-08-27T02:57:07Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from sketch to directly predict the 3D coordinates, but they usually suffer from losing fine details that are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines [49.06411148698547]
This paper shows that a significant amount of information about the 3D scene geometry is preserved in line clouds.
Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points.
arXiv Detail & Related papers (2021-03-08T21:32:43Z)
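The observation in the line-cloud privacy paper — that closest points between pairs of 3D lines approximate the original scene points — rests on a standard geometric computation, sketched here in numpy (an illustrative reconstruction of the idea, not the paper's code): two line-cloud lines that both pass near the same true scene point have their mutual closest points near it.

```python
import numpy as np

def closest_points(p1, d1, p2, d2):
    # Closest pair of points between two non-parallel 3D lines, each
    # given as a point p and a direction d. Returns (c1, c2) with
    # c1 on line 1 and c2 on line 2; (c2 - c1) is orthogonal to both
    # directions.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    n_sq = n @ n
    if n_sq < 1e-12:
        raise ValueError("lines are (nearly) parallel")
    w = p2 - p1
    # Parameters along each line of the closest points (standard
    # closed-form solution of the two orthogonality conditions).
    t1 = np.cross(w, d2) @ n / n_sq
    t2 = np.cross(w, d1) @ n / n_sq
    return p1 + t1 * d1, p2 + t2 * d2
```

If both lines pass exactly through a common point, `c1` and `c2` coincide with it; with line clouds the lines only pass near the original points, so the closest-point pairs give the approximation the paper exploits.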
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.