SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting
- URL: http://arxiv.org/abs/2502.04734v1
- Date: Fri, 07 Feb 2025 08:06:30 GMT
- Title: SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting
- Authors: Huajian Huang, Yingshu Chen, Longwei Li, Hui Cheng, Tristan Braud, Yajie Zhao, Sai-Kit Yeung
- Abstract summary: SC-OmniGS is a novel self-calibrating system for fast and accurate radiance field reconstruction using 360-degree images. We introduce a differentiable omnidirectional camera model that rectifies the distortion of real-world data to improve performance.
- Score: 29.489453234466982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 360-degree cameras streamline data collection for radiance field 3D reconstruction by capturing comprehensive scene data. However, traditional radiance field methods do not address the specific challenges inherent to 360-degree images. We present SC-OmniGS, a novel self-calibrating omnidirectional Gaussian splatting system for fast and accurate omnidirectional radiance field reconstruction from 360-degree images. Rather than converting 360-degree images to cube maps and performing perspective image calibration, we treat each 360-degree image as a whole sphere and derive a mathematical framework that enables direct omnidirectional camera pose calibration alongside 3D Gaussian optimization. Furthermore, we introduce a differentiable omnidirectional camera model that rectifies the distortion of real-world data to enhance performance. Overall, the omnidirectional camera intrinsic model, extrinsic poses, and 3D Gaussians are jointly optimized by minimizing a weighted spherical photometric loss. Extensive experiments demonstrate that SC-OmniGS recovers a high-quality radiance field from noisy camera poses, or even with no pose prior, in challenging scenarios characterized by wide baselines and non-object-centric configurations. The noticeable performance gain on a real-world dataset captured by consumer-grade omnidirectional cameras verifies the effectiveness of our general omnidirectional camera model in reducing the distortion of 360-degree images.
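The abstract describes two concrete ingredients: a direct spherical treatment of 360-degree images and a weighted spherical photometric loss. As a rough illustration, here is a minimal NumPy sketch of an ideal equirectangular projection and a solid-angle-weighted L1 photometric loss. The function names, the camera convention (x right, y down, z forward), and the choice of L1 are assumptions made for illustration, not SC-OmniGS's actual implementation, which additionally learns distortion parameters and is differentiable end to end.

```python
import numpy as np

def project_equirect(points_cam: np.ndarray, width: int, height: int) -> np.ndarray:
    """Project (N, 3) camera-space points onto an equirectangular image.

    Ideal model only: SC-OmniGS additionally optimizes distortion
    parameters of a general omnidirectional camera model.
    """
    d = points_cam / np.linalg.norm(points_cam, axis=-1, keepdims=True)
    lon = np.arctan2(d[:, 0], d[:, 2])               # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))     # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return np.stack([u, v], axis=-1)

def spherical_photometric_loss(rendered: np.ndarray, target: np.ndarray) -> float:
    """L1 photometric loss over (H, W, C) images, weighted per row.

    On an equirectangular grid, rows near the poles cover far less solid
    angle on the sphere, so each row is down-weighted by cos(latitude);
    the paper's exact weighting may differ.
    """
    h, w_px, c = rendered.shape
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2.0
    w = np.cos(lat)[:, None, None]                   # per-row solid-angle weight
    return float(np.sum(w * np.abs(rendered - target)) / (np.sum(w) * w_px * c))

# sanity check: the forward direction lands at the image center
print(project_equirect(np.array([[0.0, 0.0, 1.0]]), 2048, 1024))  # -> [[1024. 512.]]
```

In the actual system these operations would be written in a differentiable framework (the paper renders 3D Gaussians), so that gradients of the loss flow jointly into the camera intrinsics, the extrinsic poses, and the Gaussian parameters.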
Related papers
- Targetless LiDAR-Camera Calibration with Anchored 3D Gaussians [21.057702337896995]
We present a targetless LiDAR-camera calibration method that jointly optimizes sensor poses and scene geometry from arbitrary scenes.
We validate our method through extensive experiments on two real-world autonomous driving datasets.
arXiv Detail & Related papers (2025-04-06T20:00:01Z)
- AlignDiff: Learning Physically-Grounded Camera Alignment via Diffusion [0.5277756703318045]
We introduce a novel framework that estimates camera intrinsic and extrinsic parameters using a generic ray camera model.
Unlike previous approaches, AlignDiff shifts focus from semantic to geometric features, enabling more accurate modeling of local distortions.
Our experiments demonstrate that the proposed method significantly reduces the angular error of estimated ray bundles by 8.2 degrees and improves overall calibration accuracy, outperforming existing approaches on challenging real-world datasets (a sketch of the ray-bundle angular-error metric appears after this list).
arXiv Detail & Related papers (2025-03-27T14:59:59Z)
- UniK3D: Universal Camera Monocular 3D Estimation [62.06785782635153]
We present UniK3D, the first generalizable method for monocular 3D estimation able to model any camera.
Our method introduces a spherical 3D representation which allows for better disentanglement of camera and scene geometry.
A comprehensive zero-shot evaluation on 13 diverse datasets demonstrates the state-of-the-art performance of UniK3D across 3D, depth, and camera metrics.
arXiv Detail & Related papers (2025-03-20T17:49:23Z)
- IM360: Textured Mesh Reconstruction for Large-scale Indoor Mapping with 360$^\circ$ Cameras [53.53895891356167]
We present a novel 3D reconstruction pipeline that uses 360$^\circ$ cameras for 3D mapping and rendering of indoor environments.
Our approach (IM360) leverages the wide field of view of omnidirectional images and integrates the spherical camera model into every core component of the SfM pipeline.
We evaluate our pipeline on large-scale indoor scenes from the Matterport3D and Stanford2D3D datasets.
arXiv Detail & Related papers (2025-02-18T05:15:19Z)
- FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction [59.77970844874235]
We present FreeSplatter, a feed-forward reconstruction framework capable of generating high-quality 3D Gaussians from sparse-view images. FreeSplatter is built upon a streamlined transformer architecture comprising sequential self-attention blocks. We show FreeSplatter's potential in enhancing the productivity of downstream applications, such as text/image-to-3D content creation.
arXiv Detail & Related papers (2024-12-12T18:52:53Z)
- Radiant: Large-scale 3D Gaussian Rendering based on Hierarchical Framework [13.583584930991847]
We propose Radiant, a hierarchical 3DGS algorithm designed for large-scale scene reconstruction. We show that Radiant improves reconstruction quality by up to 25.7% and reduces end-to-end latency by up to 79.6%.
arXiv Detail & Related papers (2024-12-07T05:48:00Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
PF3plat sets a new state of the art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- OmniGS: Fast Radiance Field Reconstruction using Omnidirectional Gaussian Splatting [27.543561055868697]
Current 3D Gaussian Splatting systems only support radiance field reconstruction using undistorted perspective images.
We present OmniGS, a novel omnidirectional Gaussian splatting system, to take advantage of omnidirectional images for fast radiance field reconstruction.
arXiv Detail & Related papers (2024-04-04T05:10:26Z)
- 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering speed without baking.
We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images.
We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
arXiv Detail & Related papers (2024-03-26T17:21:24Z)
- Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results on both synthetic and real data demonstrate superior performance over existing methods in mitigating camera motion.
arXiv Detail & Related papers (2024-03-20T06:19:41Z)
- W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration [57.37135310143126]
Previous methods for 3D motion recovery from monocular images often fall short due to reliance on camera coordinates.
We introduce W-HMR, a weak-supervised calibration method that predicts "reasonable" focal lengths based on body distortion information.
We also present the OrientCorrect module, which corrects body orientation for plausible reconstructions in world space.
arXiv Detail & Related papers (2023-11-29T09:02:07Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods struggle to estimate an accurate correction field due to their uniform-velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state of the art by +4.98, +0.77, and +4.33 dB PSNR on the Carla-RS, Fastec-RS, and BS-RSC datasets, respectively (a sketch of the quadratic motion model appears below).
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
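As referenced in the rolling-shutter entry just above, the following is a minimal NumPy sketch of why a quadratic (constant-acceleration) motion model is richer than the uniform-velocity assumption. The simplification to a single global 2D image-space motion, and all names, are illustrative assumptions; the QRS solver itself is geometry-based and estimates correction fields for individual pixels.

```python
import numpy as np

def rs_correction_field(height: int, width: int,
                        v: np.ndarray, a: np.ndarray,
                        row_readout: float, t_ref: float) -> np.ndarray:
    """Per-pixel displacement warping a rolling-shutter frame to one
    reference time t_ref (e.g., the mid-frame scanline).

    Assumes a simplified 2D image-space motion with constant acceleration:
        p(t) = p0 + v * t + 0.5 * a * t**2   (the 'quadratic' model)
    A uniform-velocity solver would drop the acceleration term.
    """
    rows = np.arange(height) * row_readout           # capture time of each scanline
    dt = t_ref - rows                                 # time offset to the reference
    # displacement accumulated between each row's exposure and t_ref
    disp = v[None, :] * dt[:, None] + 0.5 * a[None, :] * dt[:, None] ** 2
    # broadcast the per-row 2D displacement across all columns -> (H, W, 2)
    return np.broadcast_to(disp[:, None, :], (height, width, 2)).copy()

# toy usage: 480-row sensor, 10 us per-row readout, reference = mid-frame
field = rs_correction_field(480, 640,
                            v=np.array([120.0, 0.0]),   # px/s, horizontal pan
                            a=np.array([300.0, 0.0]),   # px/s^2, speeding up
                            row_readout=1e-5, t_ref=240 * 1e-5)
print(field[0, 0], field[-1, 0])  # top rows shift opposite to bottom rows
```

Setting a = 0 recovers the uniform-velocity model; the quadratic term is what lets the correction track accelerating, nonlinear motion across the frame.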
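For the AlignDiff entry earlier in this list, which reports an 8.2-degree reduction in ray-bundle angular error, here is a small sketch of one common formulation of that metric (mean angle between corresponding unit ray directions); AlignDiff's exact definition may differ.

```python
import numpy as np

def mean_angular_error_deg(rays_est: np.ndarray, rays_gt: np.ndarray) -> float:
    """Mean angle, in degrees, between corresponding rays of two bundles.

    Both inputs are (N, 3) direction vectors; they are normalized here,
    so any scale is ignored.
    """
    e = rays_est / np.linalg.norm(rays_est, axis=1, keepdims=True)
    g = rays_gt / np.linalg.norm(rays_gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(e * g, axis=1), -1.0, 1.0)   # guard the acos domain
    return float(np.degrees(np.arccos(cos)).mean())
```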
This list is automatically generated from the titles and abstracts of the papers on this site.