3R-GS: Best Practice in Optimizing Camera Poses Along with 3DGS
- URL: http://arxiv.org/abs/2504.04294v1
- Date: Sat, 05 Apr 2025 22:31:08 GMT
- Title: 3R-GS: Best Practice in Optimizing Camera Poses Along with 3DGS
- Authors: Zhisheng Huang, Peng Wang, Jingdong Zhang, Yuan Liu, Xin Li, Wenping Wang
- Abstract summary: 3D Gaussian Splatting (3DGS) has revolutionized neural rendering with its efficiency and quality. It heavily depends on accurate camera poses from Structure-from-Motion (SfM) systems. We present 3R-GS, a 3D Gaussian Splatting framework that bridges this gap.
- Score: 36.48425755917156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has revolutionized neural rendering with its efficiency and quality, but like many novel view synthesis methods, it heavily depends on accurate camera poses from Structure-from-Motion (SfM) systems. Although recent SfM pipelines have made impressive progress, questions remain about how to simultaneously improve their robustness in challenging conditions (e.g., textureless scenes) and the precision of camera parameter estimation. We present 3R-GS, a 3D Gaussian Splatting framework that bridges this gap by jointly optimizing 3D Gaussians and camera parameters initialized from the large reconstruction prior MASt3R-SfM. We note that naively performing joint 3D Gaussian and camera optimization faces two challenges: sensitivity to the quality of the SfM initialization, and limited capacity for global optimization, leading to suboptimal reconstruction results. Our 3R-GS overcomes these issues by incorporating optimized practices, enabling robust scene reconstruction even with imperfect camera registration. Extensive experiments demonstrate that 3R-GS delivers high-quality novel view synthesis and precise camera pose estimation while remaining computationally efficient. Project page: https://zsh523.github.io/3R-GS/
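To make the joint-optimization idea concrete, the following is a minimal PyTorch-style sketch, not the 3R-GS implementation: the toy point-splatting `render` stands in for a real differentiable 3DGS rasterizer, the pose is parameterized by an assumed axis-angle correction, and all names are illustrative rather than taken from the 3R-GS or MASt3R-SfM codebases.

```python
# Minimal sketch: jointly optimize Gaussian attributes and a camera pose
# against a photometric loss. Illustrative only; the renderer is a toy
# stand-in for a differentiable 3DGS rasterizer.
import torch

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = w.norm() + 1e-8
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([zero, -k[2],  k[1]]),
                     torch.stack([ k[2],  zero, -k[0]]),
                     torch.stack([-k[1],  k[0],  zero])])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def render(means, colors, scales, R, t, f, H=32, W=32):
    """Toy splatting: project Gaussian centers and accumulate isotropic blobs."""
    cam = means @ R.T + t                        # world -> camera frame
    z = cam[:, 2].clamp(min=1e-3)
    u = f * cam[:, 0] / z + W / 2                # pinhole projection
    v = f * cam[:, 1] / z + H / 2
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    img = torch.zeros(H, W, 3)
    for i in range(means.shape[0]):
        g = torch.exp(-((xs - u[i]) ** 2 + (ys - v[i]) ** 2) / (2 * scales[i] ** 2))
        img = img + g[..., None] * colors[i]
    return img

# Tiny scene and a target image rendered from the (unknown) true pose.
torch.manual_seed(0)
means  = torch.randn(8, 3) * 0.3 + torch.tensor([0.0, 0.0, 3.0])
colors = torch.rand(8, 3)
scales = torch.full((8,), 1.5)
target = render(means, colors, scales, torch.eye(3), torch.zeros(3), f=40.0)

# Learnable Gaussian attributes plus a pose correction with a noisy start.
means  = means.clone().requires_grad_()
colors = colors.clone().requires_grad_()
w = (0.01 * torch.randn(3)).requires_grad_()     # rotation (axis-angle)
t = (0.05 * torch.randn(3)).requires_grad_()     # translation

opt = torch.optim.Adam([
    {"params": [means, colors], "lr": 1e-2},
    {"params": [w, t], "lr": 1e-3},              # poses typically get a smaller lr
])
for step in range(200):
    opt.zero_grad()
    loss = (render(means, colors, scales, so3_exp(w), t, f=40.0) - target).abs().mean()
    loss.backward()                              # gradients reach Gaussians *and* pose
    opt.step()
```

As the abstract notes, such naive joint optimization is sensitive to the initial poses and has limited capacity to correct them globally; the smaller pose learning rate above is a common mitigation in this kind of sketch, not the paper's remedy.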
Related papers
- A Constrained Optimization Approach for Gaussian Splatting from Coarsely-posed Images and Noisy Lidar Point Clouds [37.043012716944496]
We introduce a constrained optimization method for simultaneous camera pose estimation and 3D reconstruction.
Experiments demonstrate that the proposed method significantly outperforms the existing (multi-modal) 3DGS baseline.
arXiv Detail & Related papers (2025-04-12T08:34:43Z) - DashGaussian: Optimizing 3D Gaussian Splatting in 200 Seconds [71.37326848614133]
We propose DashGaussian, a scheduling scheme over the optimization complexity of 3DGS.
We show that our method accelerates the optimization of various 3DGS backbones by 45.7% on average.
arXiv Detail & Related papers (2025-03-24T07:17:27Z) - TrackGS: Optimizing COLMAP-Free 3D Gaussian Splatting with Global Track Constraints [40.9371798496134]
We introduce TrackGS, which incorporates feature tracks to globally constrain multi-view geometry. We also propose minimizing both reprojection and backprojection errors for better geometric consistency (a generic sketch of such a reprojection term appears after this list). By deriving the gradient of the intrinsics, we unify camera parameter estimation with 3DGS training into a joint optimization framework.
arXiv Detail & Related papers (2025-02-27T06:16:04Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - Look Gauss, No Pose: Novel View Synthesis using Gaussian Splatting without Accurate Pose Initialization [11.418632671254564]
3D Gaussian Splatting has emerged as a powerful tool for fast and accurate novel-view synthesis from a set of posed input images.
We propose an extension to the 3D Gaussian Splatting framework by optimizing the extrinsic camera parameters with respect to photometric residuals.
We show results on real-world scenes and complex trajectories through simulated environments.
arXiv Detail & Related papers (2024-10-11T12:01:15Z) - Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis [11.236094544193605]
Conventional geometry-based SLAM systems lack dense 3D reconstruction capabilities.
We propose a real-time RGB-D SLAM system that incorporates a novel view synthesis technique, 3D Gaussian Splatting.
arXiv Detail & Related papers (2024-08-10T21:23:08Z) - Free-SurGS: SfM-Free 3D Gaussian Splatting for Surgical Scene Reconstruction [36.46068581419659]
Real-time 3D reconstruction of surgical scenes plays a vital role in computer-assisted surgery.
Recent advancements in 3D Gaussian Splatting have shown great potential for real-time novel view synthesis.
We propose the first SfM-free 3DGS-based method for surgical scene reconstruction.
arXiv Detail & Related papers (2024-07-03T08:49:35Z) - PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z) - LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS, where a trainable binary mask is applied to the importance score that can find optimal pruning ratio automatically.
Experiments have shown that LP-3DGS consistently produces a good balance that is both efficient and high quality.
arXiv Detail & Related papers (2024-05-29T05:58:34Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the dataset, exhibiting an improvement of 1.15dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - COLMAP-Free 3D Gaussian Splatting [88.420322646756]
We propose a novel method to perform novel view synthesis without any SfM preprocessing.
We process the input frames in a sequential manner and progressively grow the set of 3D Gaussians by taking one input frame at a time.
Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes.
arXiv Detail & Related papers (2023-12-12T18:39:52Z)
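Several entries above, TrackGS in particular, constrain camera parameters with a reprojection error over feature tracks. The sketch below shows the generic form of such a term in PyTorch with a simple pinhole model and a learnable focal length; the setup and names are assumptions for illustration, not any listed paper's actual formulation.

```python
# Generic reprojection-error term over tracked points (illustrative only).
# Making the focal length a learnable tensor lets gradients flow into the
# intrinsics as well as the extrinsics.
import torch

def reprojection_error(points3d, obs2d, R, t, f, cx, cy):
    """Mean L2 distance between projected 3D points and their 2D observations."""
    cam = points3d @ R.T + t                  # world -> camera frame
    z = cam[:, 2].clamp(min=1e-6)
    u = f * cam[:, 0] / z + cx
    v = f * cam[:, 1] / z + cy
    return (torch.stack([u, v], dim=-1) - obs2d).norm(dim=-1).mean()

# Toy usage: recover a miscalibrated focal length from noiseless observations.
torch.manual_seed(0)
pts = torch.randn(50, 3) + torch.tensor([0.0, 0.0, 4.0])  # points in front of the camera
R, t = torch.eye(3), torch.zeros(3)
f_true = torch.tensor(500.0)
cam = pts @ R.T + t
obs = torch.stack([f_true * cam[:, 0] / cam[:, 2] + 320.0,
                   f_true * cam[:, 1] / cam[:, 2] + 240.0], dim=-1)

f = torch.tensor(480.0, requires_grad=True)   # wrong initial intrinsics
opt = torch.optim.Adam([f], lr=1.0)
for _ in range(300):
    opt.zero_grad()
    reprojection_error(pts, obs, R, t, f, 320.0, 240.0).backward()
    opt.step()
# f is driven back toward f_true because the residual is differentiable in f.
```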
This list is automatically generated from the titles and abstracts of the papers on this site.