Dense-SfM: Structure from Motion with Dense Consistent Matching
- URL: http://arxiv.org/abs/2501.14277v1
- Date: Fri, 24 Jan 2025 06:45:12 GMT
- Title: Dense-SfM: Structure from Motion with Dense Consistent Matching
- Authors: JongMin Lee, Sungjoo Yoo
- Abstract summary: We present Dense-SfM, a novel framework for dense and accurate 3D reconstruction from multi-view images.
Dense-SfM integrates dense matching with a Gaussian Splatting (GS) based track extension which gives more consistent, longer feature tracks.
Dense-SfM offers significant improvements in accuracy and density over state-of-the-art methods.
- Score: 10.24418219366936
- License:
- Abstract: We present Dense-SfM, a novel Structure from Motion (SfM) framework designed for dense and accurate 3D reconstruction from multi-view images. Sparse keypoint matching, which traditional SfM methods often rely on, limits both accuracy and point density, especially in texture-less areas. Dense-SfM addresses this limitation by integrating dense matching with a Gaussian Splatting (GS) based track extension which gives more consistent, longer feature tracks. To further improve reconstruction accuracy, Dense-SfM is equipped with a multi-view kernelized matching module leveraging transformer and Gaussian Process architectures, for robust track refinement across multi-views. Evaluations on the ETH3D and Texture-Poor SfM datasets show that Dense-SfM offers significant improvements in accuracy and density over state-of-the-art methods.
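A minimal illustration of the kernelized-matching idea, assuming only that correspondence refinement can be cast as Gaussian Process regression over local similarity scores. The helper names (`rbf_kernel`, `gp_refine_match`), the NumPy-only setup, and the toy similarity patch are illustrative assumptions, not the paper's transformer-plus-Gaussian-Process module:

```python
# Sketch only: GP regression over a coarse similarity patch to recover a
# sub-pixel match offset. Not the authors' implementation.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 2D offsets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_refine_match(offsets, scores, noise=1e-2, length_scale=1.0, upsample=8):
    """Fit a GP to coarse similarity scores and return a refined sub-pixel offset.

    offsets : (N, 2) integer-pixel offsets around a coarse match
    scores  : (N,)   feature similarity sampled at each offset
    """
    K = rbf_kernel(offsets, offsets, length_scale) + noise * np.eye(len(offsets))
    alpha = np.linalg.solve(K, scores)

    # Evaluate the GP posterior mean on a finer grid and take its peak.
    r = float(offsets[:, 0].max())
    fine = np.linspace(-r, r, int(2 * r * upsample) + 1)
    gx, gy = np.meshgrid(fine, fine, indexing="ij")
    queries = np.stack([gx.ravel(), gy.ravel()], axis=-1)
    posterior_mean = rbf_kernel(queries, offsets, length_scale) @ alpha
    return queries[np.argmax(posterior_mean)]

# Toy usage: a 5x5 similarity patch whose true peak lies off the integer grid.
grid = np.stack(np.meshgrid(np.arange(-2, 3), np.arange(-2, 3), indexing="ij"),
                axis=-1).reshape(-1, 2).astype(float)
true_peak = np.array([0.4, -0.3])
similarity = np.exp(-((grid - true_peak) ** 2).sum(-1) / 2.0)
print(gp_refine_match(grid, similarity))   # should land near (0.4, -0.3)
```

The sketch covers a single reference/target pair; in the paper's framework this kind of refinement is applied jointly across all views of a track.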
Related papers
- GP-GS: Gaussian Processes for Enhanced Gaussian Splatting [10.45038376276218]
This paper proposes a novel 3D reconstruction framework that achieves adaptive and uncertainty-guided densification of sparse SfM point clouds.
The pipeline utilizes uncertainty estimates to guide the pruning of high-variance predictions.
Experiments conducted on synthetic and real-world datasets validate the effectiveness and practicality of the proposed framework.
arXiv Detail & Related papers (2025-02-04T12:50:16Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- Robust Incremental Structure-from-Motion with Hybrid Features [73.55745864762703]
We introduce an incremental Structure-from-Motion (SfM) system that leverages lines and their structured geometric relations.
Our system is consistently more robust and accurate compared to the widely used point-based state of the art in SfM.
arXiv Detail & Related papers (2024-09-29T22:20:32Z)
- Correspondence-Guided SfM-Free 3D Gaussian Splatting for NVS [52.3215552448623]
Novel View Synthesis (NVS) without Structure-from-Motion (SfM) pre-processed camera poses is crucial for promoting rapid response capabilities and enhancing robustness against variable operating conditions.
Recent SfM-free methods have integrated pose optimization, designing end-to-end frameworks for joint camera pose estimation and NVS.
Most existing works rely on per-pixel image loss functions, such as L2 loss.
In this study, we propose a correspondence-guided SfM-free 3D Gaussian splatting for NVS.
arXiv Detail & Related papers (2024-08-16T13:11:22Z)
- Distributed Global Structure-from-Motion with a Deep Front-End [11.2064188838227]
We investigate whether leveraging the developments in feature extraction and matching helps global SfM perform on par with the SOTA incremental SfM approach (COLMAP).
Our SfM system is designed from the ground up to leverage distributed computation, enabling us to parallelize computation on multiple machines and scale to large scenes.
arXiv Detail & Related papers (2023-11-30T18:47:18Z)
- AdaSfM: From Coarse Global to Fine Incremental Adaptive Structure from Motion [48.835456049755166]
AdaSfM is a coarse-to-fine adaptive SfM approach that is scalable to large-scale and challenging datasets.
Our approach first performs a coarse global SfM, which improves the reliability of the view graph by leveraging measurements from low-cost sensors.
Our approach uses a threshold-adaptive strategy to align all local reconstructions to the coordinate frame of global SfM.
arXiv Detail & Related papers (2023-01-28T09:06:50Z)
- DeepMLE: A Robust Deep Maximum Likelihood Estimator for Two-view Structure from Motion [9.294501649791016]
Two-view structure from motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM (vSLAM).
We formulate the two-view SfM problem as a maximum likelihood estimation (MLE) and solve it with the proposed framework, denoted as DeepMLE.
Our method significantly outperforms the state-of-the-art end-to-end two-view SfM approaches in accuracy and generalization capability.
arXiv Detail & Related papers (2022-10-11T15:07:25Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes (see the sketch after this entry).
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
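As referenced in the TANDEM entry above, here is a minimal sketch of the photometric residual that a sliding-window photometric bundle adjustment minimizes. Everything in it (the pinhole model, `warp_point`, `photometric_residuals`, the nearest-pixel lookup) is an illustrative assumption, not TANDEM's actual implementation, which couples such residuals with its dense depth prediction and fusion pipeline:

```python
# Sketch only: photometric residuals between one reference/target keyframe pair.
import numpy as np

def warp_point(p, inv_depth, T_tgt_ref, K, K_inv):
    """Back-project pixel p = (u, v) from the reference keyframe at the given
    inverse depth, move it into the target keyframe, and re-project it."""
    ray = K_inv @ np.array([p[0], p[1], 1.0])
    x_ref = ray / inv_depth                               # 3D point in the reference frame
    x_tgt = T_tgt_ref[:3, :3] @ x_ref + T_tgt_ref[:3, 3]  # rigid transform into the target frame
    uvw = K @ x_tgt
    return uvw[:2] / uvw[2]                               # pixel coordinates in the target image

def photometric_residuals(I_ref, I_tgt, pixels, inv_depths, T_tgt_ref, K):
    """Brightness differences for one keyframe pair; a sliding-window BA stacks
    these over all pairs in the window and optimizes the poses (and possibly
    the inverse depths) that they depend on."""
    K_inv = np.linalg.inv(K)
    residuals = []
    for p, idp in zip(pixels, inv_depths):
        q = warp_point(p, idp, T_tgt_ref, K, K_inv)
        u, v = int(round(q[0])), int(round(q[1]))
        if 0 <= v < I_tgt.shape[0] and 0 <= u < I_tgt.shape[1]:
            residuals.append(float(I_ref[p[1], p[0]]) - float(I_tgt[v, u]))
    return np.array(residuals)
```

A real system would use bilinear interpolation and a robust loss (e.g. Huber) rather than the nearest-pixel lookup and raw differences above.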
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.