Interacted Planes Reveal 3D Line Mapping
- URL: http://arxiv.org/abs/2602.01296v1
- Date: Sun, 01 Feb 2026 15:52:55 GMT
- Title: Interacted Planes Reveal 3D Line Mapping
- Authors: Zeran Ke, Bin Tan, Gui-Song Xia, Yujun Shen, Nan Xue
- Abstract summary: LiP-Map is a line-plane joint optimization framework for 3D line mapping. On more than 100 scenes from ScanNetV2, ScanNet++, Hypersim, 7Scenes, and Tanks & Temples, LiP-Map improves both accuracy and completeness over state-of-the-art methods.
- Score: 73.60851338875962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D line mapping from multi-view RGB images provides a compact and structured visual representation of scenes. We study the problem from a physical and topological perspective: a 3D line most naturally emerges as the edge of a finite 3D planar patch. We present LiP-Map, a line-plane joint optimization framework that explicitly models learnable line and planar primitives. This coupling enables accurate and detailed 3D line mapping while maintaining strong efficiency (typically completing a reconstruction in 3 to 5 minutes per scene). LiP-Map pioneers the integration of planar topology into 3D line mapping, not by imposing pairwise coplanarity constraints but by explicitly constructing interactions between plane and line primitives, thus offering a principled route toward structured reconstruction in man-made environments. On more than 100 scenes from ScanNetV2, ScanNet++, Hypersim, 7Scenes, and Tanks & Temples, LiP-Map improves both accuracy and completeness over state-of-the-art methods. Beyond line mapping quality, LiP-Map significantly advances line-assisted visual localization, establishing strong performance on 7Scenes. Our code is released at https://github.com/calmke/LiPMAP for reproducible research.
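To make the abstract's line-plane interaction idea concrete, here is a minimal sketch of one possible coupling residual. It is not the authors' implementation (see the linked repository for that); it assumes an illustrative parameterization where a plane is a normal n and offset d (points x on the plane satisfy n·x + d = 0) and a line segment is its two 3D endpoints, and it penalizes the signed point-to-plane distance of both endpoints so a joint optimizer can pull a line onto its supporting planar patch.

```python
# Minimal sketch (assumed parameterization, not the LiP-Map code):
# a line segment is coupled to a plane by the signed point-to-plane
# distances of its two endpoints.
import torch

def line_plane_residual(endpoints: torch.Tensor,
                        normal: torch.Tensor,
                        offset: torch.Tensor) -> torch.Tensor:
    """Signed point-to-plane distances of both line endpoints.

    endpoints: (2, 3) tensor, the two 3D endpoints of a line segment
    normal:    (3,) tensor, normal of the interacting plane
    offset:    scalar tensor, plane offset d in n.x + d = 0
    """
    n = normal / normal.norm()      # keep the plane normal at unit length
    return endpoints @ n + offset   # (2,) signed distances

# Toy joint update: pull a line segment onto the plane z = 1.
endpoints = torch.tensor([[0.0, 0.0, 0.3],
                          [1.0, 0.0, 1.6]], requires_grad=True)
normal = torch.tensor([0.0, 0.0, 1.0])
offset = torch.tensor(-1.0)

opt = torch.optim.Adam([endpoints], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = line_plane_residual(endpoints, normal, offset).pow(2).sum()
    loss.backward()
    opt.step()

print(endpoints.detach())  # both endpoints end up close to z = 1
```

In a full pipeline such a term would sit alongside 2D reprojection losses, with the plane parameters learnable as well; the sketch only isolates the interaction between the two primitive types.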
Related papers
- MapRF: Weakly Supervised Online HD Map Construction via NeRF-Guided Self-Training [6.6099504578472414]
MapRF is a weakly supervised framework that learns to construct 3D maps using only 2D image labels. To mitigate error accumulation during self-training, we propose a Map-to-Ray Matching strategy.
arXiv Detail & Related papers (2025-11-24T07:23:10Z)
- PLANA3R: Zero-shot Metric Planar 3D Reconstruction via Feed-Forward Planar Splatting [56.188624157291024]
We introduce PLANA3R, a pose-free framework for metric Planar 3D Reconstruction from unposed two-view images. Unlike prior feedforward methods that require 3D plane annotations during training, PLANA3R learns planar 3D structures without explicit plane supervision. We validate PLANA3R on multiple indoor-scene datasets with metric supervision and demonstrate strong generalization to out-of-domain indoor environments.
arXiv Detail & Related papers (2025-10-21T15:15:33Z)
- Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting [57.43483622778394]
We introduce PM-Loss, a novel regularization loss based on a pointmap predicted by a pre-trained transformer. With the improved depth map, our method significantly improves feed-forward 3DGS across various architectures and scenes.
arXiv Detail & Related papers (2025-06-05T17:58:23Z)
- TSP3D: Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding [74.033589504806]
We propose an efficient multi-level convolution architecture for 3D visual grounding. Our method achieves the top inference speed, surpassing the previous fastest method by 100% in FPS.
arXiv Detail & Related papers (2025-02-14T18:59:59Z)
- LineGS: 3D Line Segment Representation on 3D Gaussian Splatting [0.0]
LineGS is a novel method that combines geometry-guided 3D line reconstruction with a 3D Gaussian splatting model. The results show significant improvements in both geometric accuracy and model compactness compared to baseline methods.
arXiv Detail & Related papers (2024-11-30T13:29:36Z)
- 3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation [33.92758288570465]
Accurate maps are a key building block for reliable localization, planning, and navigation of autonomous vehicles.
We propose encoding the 4D scene into a novel implicit neural map representation.
Our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps.
arXiv Detail & Related papers (2024-05-06T11:46:04Z)
- Representing 3D sparse map points and lines for camera relocalization [1.2974519529978974]
We show how a lightweight neural network can learn to represent both 3D point and line features.
In tests, our method achieves the largest improvement over state-of-the-art learning-based methods.
arXiv Detail & Related papers (2024-02-28T03:07:05Z)
- 3D Line Mapping Revisited [86.13455066577657]
LIMAP is a library for 3D line mapping that robustly and efficiently creates 3D line maps from multi-view imagery.
Our code integrates seamlessly with existing point-based Structure-from-Motion methods.
Our robust 3D line maps also open up new research directions.
arXiv Detail & Related papers (2023-03-30T16:14:48Z)
- SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations [37.733802382489515]
This paper addresses the problems of achieving large-scale 3D reconstructions with implicit representations using 3D LiDAR measurements.
We learn and store implicit features through an octree-based hierarchical structure, which is sparse and extensible.
Our experiments show that our 3D reconstructions are more accurate, complete, and memory-efficient than current state-of-the-art 3D mapping methods.
arXiv Detail & Related papers (2022-10-05T14:38:49Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.