ExploreGS: Explorable 3D Scene Reconstruction with Virtual Camera Samplings and Diffusion Priors
- URL: http://arxiv.org/abs/2508.06014v1
- Date: Fri, 08 Aug 2025 05:01:17 GMT
- Title: ExploreGS: Explorable 3D Scene Reconstruction with Virtual Camera Samplings and Diffusion Priors
- Authors: Minsu Kim, Subin Jeon, In Cho, Mijin Yoo, Seon Joo Kim
- Abstract summary: We propose a 3DGS-based pipeline that generates additional training views to enhance reconstruction. Fine-tuning 3D Gaussians with these enhanced views significantly improves reconstruction quality. Experiments demonstrate that our approach outperforms existing 3DGS-based methods.
- Score: 37.455535904703204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in novel view synthesis (NVS) have enabled real-time rendering with 3D Gaussian Splatting (3DGS). However, existing methods struggle with artifacts and missing regions when rendering from viewpoints that deviate from the training trajectory, limiting seamless scene exploration. To address this, we propose a 3DGS-based pipeline that generates additional training views to enhance reconstruction. We introduce an information-gain-driven virtual camera placement strategy to maximize scene coverage, followed by video diffusion priors to refine rendered results. Fine-tuning 3D Gaussians with these enhanced views significantly improves reconstruction quality. To evaluate our method, we present Wild-Explore, a benchmark designed for challenging scene exploration. Experiments demonstrate that our approach outperforms existing 3DGS-based methods, enabling high-quality, artifact-free rendering from arbitrary viewpoints. https://exploregs.github.io
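The abstract does not specify the exact information-gain criterion used for virtual camera placement. As a rough, hypothetical illustration (all names and the voxel-coverage proxy are assumptions, not the paper's actual method), a greedy next-best-view selection that maximizes newly covered scene regions might look like:

```python
def information_gain(seen_voxels: set, cam_voxels: set) -> int:
    """Marginal gain of a candidate camera: count of voxels it would
    observe that are not yet covered by the already-selected cameras."""
    return len(cam_voxels - seen_voxels)

def greedy_camera_selection(candidates, k):
    """Greedily pick up to k virtual camera poses maximizing coverage.

    candidates: list of (pose_id, set_of_visible_voxel_ids) pairs,
                e.g. from rendering a coarse 3DGS model at sampled poses.
    Returns (chosen_pose_ids, covered_voxel_ids).
    """
    seen: set = set()
    chosen = []
    remaining = dict(candidates)
    for _ in range(k):
        # Evaluate the marginal information gain of each remaining pose.
        best_id, best_gain = None, -1
        for pose_id, vox in remaining.items():
            gain = information_gain(seen, vox)
            if gain > best_gain:
                best_id, best_gain = pose_id, gain
        if best_id is None or best_gain == 0:
            break  # no candidate adds new coverage; stop early
        chosen.append(best_id)
        seen |= remaining.pop(best_id)
    return chosen, seen
```

The selected poses would then be rendered, refined with the video diffusion prior, and fed back as additional training views, per the pipeline described above.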
Related papers
- GSFixer: Improving 3D Gaussian Splatting with Reference-Guided Video Diffusion Priors [44.901133648775605]
GSFixer is a framework designed to improve the quality of 3DGS representations reconstructed from sparse inputs. Our model integrates both 2D semantic features and 3D geometric features of reference views extracted from a visual geometry foundation model. Considering the lack of suitable benchmarks for 3DGS artifact restoration evaluation, we present DL3DV-Res, which contains artifact frames rendered using low-quality 3DGS.
arXiv Detail & Related papers (2025-08-13T09:56:28Z) - DIP-GS: Deep Image Prior For Gaussian Splatting Sparse View Recovery [31.43307762723943]
3D Gaussian Splatting (3DGS) is a leading 3D scene reconstruction method, obtaining high-quality reconstruction with real-time rendering performance. While achieving superior performance with many views, 3DGS struggles with sparse view reconstruction, where the input views are sparse, do not fully cover the scene, and have low overlap. In this paper, we propose DIP-GS, a Deep Image Prior (DIP) 3DGS representation.
arXiv Detail & Related papers (2025-08-10T14:47:32Z) - EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z) - S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction [58.37746062258149]
3D Gaussian Splatting (3DGS) has reshaped the field of 3D reconstruction, achieving impressive rendering quality and speed. Existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases. We propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction.
arXiv Detail & Related papers (2025-03-11T09:37:13Z) - Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results. 3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z) - 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors [13.191199172286508]
Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos.
3DGS-Enhancer is a novel pipeline for enhancing the quality of 3DGS representations.
arXiv Detail & Related papers (2024-10-21T17:59:09Z) - SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion [46.23575738669567]
Novel View Synthesis plays a crucial role by generating new 2D renderings from multi-view images of 3D scenes.
High-frame-rate dense 3D reconstruction emerges as a vital technique, enabling detailed and accurate modeling of real-world objects or scenes.
Spike cameras, a novel type of neuromorphic sensor, continuously record scenes with an ultra-high temporal resolution.
arXiv Detail & Related papers (2024-07-14T03:19:30Z) - PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z) - Bootstrap-GS: Self-Supervised Augmentation for High-Fidelity Gaussian Splatting [9.817215106596146]
3D-GS faces limitations when generating novel views that significantly deviate from those encountered during training. We introduce a bootstrapping framework to address this problem. Our approach synthesizes pseudo-ground truth from novel views that align with the limited training set.
arXiv Detail & Related papers (2024-04-29T12:57:05Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the dataset, exhibiting an improvement of 1.15dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - SparseGS: Real-Time 360° Sparse View Synthesis using Gaussian Splatting [6.506706621221143]
3D Gaussian Splatting (3DGS) has recently enabled real-time rendering of 3D scenes for novel view synthesis. This technique requires dense training views to accurately reconstruct 3D geometry. We introduce SparseGS, an efficient training pipeline designed to address the limitations of 3DGS in scenarios with sparse training views.
arXiv Detail & Related papers (2023-11-30T21:38:22Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.