TrackGS: Optimizing COLMAP-Free 3D Gaussian Splatting with Global Track Constraints
- URL: http://arxiv.org/abs/2502.19800v2
- Date: Wed, 12 Mar 2025 08:03:52 GMT
- Title: TrackGS: Optimizing COLMAP-Free 3D Gaussian Splatting with Global Track Constraints
- Authors: Dongbo Shi, Shen Cao, Lubin Fan, Bojian Wu, Jinhui Guo, Renjie Chen, Ligang Liu, Jieping Ye
- Abstract summary: We introduce TrackGS, which incorporates feature tracks to globally constrain multi-view geometry. We also propose minimizing both reprojection and backprojection errors for better geometric consistency. By deriving the gradient of intrinsics, we unify camera parameter estimation with 3DGS training into a joint optimization framework.
- Score: 40.9371798496134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While 3D Gaussian Splatting (3DGS) has advanced the state of the art in novel view synthesis, it still depends on accurate pre-computed camera parameters, which are hard to obtain and prone to noise. Previous COLMAP-Free methods optimize camera poses using local constraints, but they often struggle in complex scenarios. To address this, we introduce TrackGS, which incorporates feature tracks to globally constrain multi-view geometry. We select the Gaussians associated with each track, which are trained and rescaled to an infinitesimally small size to guarantee spatial accuracy. We also propose minimizing both reprojection and backprojection errors for better geometric consistency. Moreover, by deriving the gradient of intrinsics, we unify camera parameter estimation with 3DGS training into a joint optimization framework, achieving SOTA performance on challenging datasets with severe camera movements.
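The reprojection and backprojection error terms mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation; the function names, the pinhole conventions, and the point-to-ray form of the backprojection error are assumptions made for illustration only.

```python
import numpy as np

def reprojection_error(X, uv, K, R, t):
    """Distance between an observed 2D track point `uv` and the
    projection of 3D point `X` through camera (K, R, t)."""
    x_cam = R @ X + t               # world -> camera
    x_img = K @ x_cam               # camera -> image (homogeneous)
    proj = x_img[:2] / x_img[2]     # perspective divide
    return np.linalg.norm(proj - uv)

def backprojection_error(X, uv, K, R, t):
    """Distance from 3D point `X` to the viewing ray obtained by
    back-projecting the 2D observation `uv` into world space."""
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    d_world = R.T @ d_cam
    d_world /= np.linalg.norm(d_world)   # unit ray direction
    c = -R.T @ t                         # camera center in world coords
    v = X - c
    # point-to-ray distance: remove the component along the ray
    return np.linalg.norm(v - (v @ d_world) * d_world)
```

Minimizing both terms over tracks constrains the 3D points and the camera parameters jointly, rather than only penalizing 2D image-space residuals.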
Related papers
- 3R-GS: Best Practice in Optimizing Camera Poses Along with 3DGS [36.48425755917156]
3D Gaussian Splatting (3DGS) has revolutionized neural rendering with its efficiency and quality.
It heavily depends on accurate camera poses from Structure-from-Motion (SfM) systems.
We present 3R-GS, a 3D Gaussian Splatting framework that bridges this gap.
arXiv Detail & Related papers (2025-04-05T22:31:08Z)
- Coca-Splat: Collaborative Optimization for Camera Parameters and 3D Gaussians [26.3996055215988]
Coca-Splat is a novel approach to the challenges of sparse-view, pose-free scene reconstruction and novel view synthesis (NVS).
Inspired by deformable DEtection TRansformer, we design separate queries for 3D Gaussians and camera parameters.
We update them layer by layer through deformable Transformer layers, enabling joint optimization in a single network.
arXiv Detail & Related papers (2025-04-01T10:48:46Z)
- 3DGS$^2$: Near Second-order Converging 3D Gaussian Splatting [26.94968605302451]
3D Gaussian Splatting (3DGS) has emerged as a mainstream solution for novel view synthesis and 3D reconstruction. This paper introduces a (near) second-order convergent training algorithm for 3DGS, leveraging its unique properties.
arXiv Detail & Related papers (2025-01-22T22:28:11Z)
- KeyGS: A Keyframe-Centric Gaussian Splatting Method for Monocular Image Sequences [14.792295042683254]
We present an efficient framework that operates without any depth or matching model. We propose a coarse-to-fine frequency-aware densification to reconstruct different levels of details.
arXiv Detail & Related papers (2024-12-30T07:32:35Z)
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving superior rendering speed.
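The "lift 2D parameter maps to 3D space" step described above can be sketched as a per-pixel unprojection of a predicted depth map. This is an illustrative NumPy sketch, not GPS-Gaussian+'s actual implementation; the function name and camera conventions are assumptions.

```python
import numpy as np

def lift_to_3d(depth, K, R, t):
    """Unproject a per-pixel depth map into 3D world points.
    `depth` is an (H, W) array; (K, R, t) is a pinhole camera."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x (H*W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                 # camera-space rays
    X_cam = rays * depth.reshape(1, -1)           # scale rays by depth
    X_world = R.T @ (X_cam - t.reshape(3, 1))     # camera -> world
    return X_world.T.reshape(H, W, 3)
```

With such a lifting, any per-pixel Gaussian parameters (color, scale, opacity) predicted in 2D can be attached to the corresponding 3D positions.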
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- Look Gauss, No Pose: Novel View Synthesis using Gaussian Splatting without Accurate Pose Initialization [11.418632671254564]
3D Gaussian Splatting has emerged as a powerful tool for fast and accurate novel-view synthesis from a set of posed input images.
We propose an extension to the 3D Gaussian Splatting framework by optimizing the extrinsic camera parameters with respect to photometric residuals.
We show results on real-world scenes and complex trajectories through simulated environments.
arXiv Detail & Related papers (2024-10-11T12:01:15Z)
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS, in which a trainable binary mask applied to the importance score finds the optimal pruning ratio automatically.
Experiments show that LP-3DGS consistently strikes a good balance between efficiency and rendering quality.
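The trainable-mask pruning idea can be sketched as gating each Gaussian's importance score with a per-Gaussian learned logit. This is a hypothetical illustration, not LP-3DGS's actual code: the function name, the sigmoid relaxation, and the 0.5 threshold are assumptions, and the gradient machinery (e.g. a straight-through estimator) that would make the mask trainable is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prune_by_learned_mask(importance, logits, threshold=0.5):
    """Gate each Gaussian's importance score with a (notionally
    trainable) soft mask; Gaussians whose gate falls below
    `threshold` are pruned. Only the forward pass is shown."""
    gate = sigmoid(logits)        # soft mask in (0, 1)
    keep = gate >= threshold      # hard selection at inference
    return importance * gate, keep

# Toy example: four Gaussians with learned per-Gaussian logits
scores = np.array([0.9, 0.1, 0.5, 0.7])
logits = np.array([3.0, -3.0, 0.2, 2.0])
gated, keep = prune_by_learned_mask(scores, logits)
```

Because the mask values are learned jointly with the scene, the pruning ratio emerges from training rather than being fixed by hand.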
arXiv Detail & Related papers (2024-05-29T05:58:34Z)
- Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z)
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at $\ge$ 5 FPS and real-time rendering at $\ge$ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting [58.41056963451056]
We propose a few-shot view synthesis framework based on 3D Gaussian Splatting.
This framework enables real-time and photo-realistic view synthesis with as few as three training views.
FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets.
arXiv Detail & Related papers (2023-12-01T09:30:02Z)
- Efficient Global Optimization of Non-differentiable, Symmetric Objectives for Multi Camera Placement [0.0]
We propose a novel iterative method for optimally placing and orienting multiple cameras in a 3D scene.
Sample applications include improving the accuracy of 3D reconstruction, maximizing the covered area for surveillance, or improving the coverage in multi-viewpoint pedestrian tracking.
arXiv Detail & Related papers (2021-03-20T17:01:15Z)
- Spatiotemporal Bundle Adjustment for Dynamic 3D Human Reconstruction in the Wild [49.672487902268706]
We present a framework that jointly estimates camera temporal alignment and 3D point triangulation.
We reconstruct 3D motion trajectories of human bodies in events captured by multiple uncalibrated and unsynchronized video cameras.
arXiv Detail & Related papers (2020-07-24T23:50:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.