Extreme Views: 3DGS Filter for Novel View Synthesis from Out-of-Distribution Camera Poses
- URL: http://arxiv.org/abs/2510.20027v1
- Date: Wed, 22 Oct 2025 21:09:16 GMT
- Title: Extreme Views: 3DGS Filter for Novel View Synthesis from Out-of-Distribution Camera Poses
- Authors: Damian Bowness, Charalambos Poullis
- Abstract summary: When viewing a 3D Gaussian Splatting (3DGS) model from camera positions significantly outside the training data distribution, substantial visual noise commonly occurs. We propose a novel real-time render-aware filtering method to address this issue. Our method substantially improves visual quality, realism, and consistency compared to existing Neural Radiance Field (NeRF)-based approaches.
- Score: 3.007949058551534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When viewing a 3D Gaussian Splatting (3DGS) model from camera positions significantly outside the training data distribution, substantial visual noise commonly occurs. These artifacts result from the lack of training data in these extrapolated regions, leading to uncertain density, color, and geometry predictions from the model. To address this issue, we propose a novel real-time render-aware filtering method. Our approach leverages sensitivity scores derived from intermediate gradients, explicitly targeting instabilities caused by anisotropic orientations rather than isotropic variance. This filtering method directly addresses the core issue of generative uncertainty, allowing 3D reconstruction systems to maintain high visual fidelity even when users freely navigate outside the original training viewpoints. Experimental evaluation demonstrates that our method substantially improves visual quality, realism, and consistency compared to existing Neural Radiance Field (NeRF)-based approaches such as BayesRays. Critically, our filter seamlessly integrates into existing 3DGS rendering pipelines in real-time, unlike methods that require extensive post-hoc retraining or fine-tuning. Code and results at https://damian-bowness.github.io/EV3DGS
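The abstract describes the mechanism only at a high level. As a rough illustration of what a render-aware, gradient-derived sensitivity filter could look like, the sketch below scores each Gaussian by how strongly the rendered image reacts to perturbations of its orientation and masks the most sensitive splats; the function names, the choice of rotation gradients as the sensitivity proxy, and the quantile threshold are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a render-aware sensitivity filter for out-of-distribution
# (OOD) camera poses. The rotation-gradient proxy and quantile cutoff below are
# assumptions; the paper's actual scoring may differ.
import torch

def sensitivity_scores(render_fn, colors, rotations, camera_pose):
    """Score each Gaussian by how strongly the rendered image reacts to
    perturbations of its orientation (an anisotropic-instability proxy)."""
    rotations = rotations.detach().requires_grad_(True)
    image = render_fn(colors, rotations, camera_pose)  # differentiable rasterizer
    image.sum().backward()                             # intermediate gradients
    return rotations.grad.norm(dim=-1)                 # one score per Gaussian

def keep_mask(scores, keep_quantile=0.95):
    """Drop the most orientation-sensitive Gaussians before compositing."""
    return scores <= torch.quantile(scores, keep_quantile)
```

Presumably such a mask would be recomputed per frame and applied only when the query pose falls outside the training-pose distribution, leaving in-distribution views untouched.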
Related papers
- Pi-GS: Sparse-View Gaussian Splatting with Dense π^3 Initialization [5.5775900281150514]
We propose a robust method utilizing π^3, a reference-free point cloud estimation network. We employ uncertainty-guided depth supervision, normal consistency loss, and depth warping. Our approach achieves state-of-the-art performance on the Tanks and Temples, LLFF, DTU, and MipNeRF360 datasets.
arXiv Detail & Related papers (2026-02-03T09:55:03Z)
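As a rough illustration of the depth-warping supervision mentioned in the Pi-GS summary above, the sketch below warps a source-view depth map into a target view and penalizes disagreement with the target depth; the function name, the L1 penalty, and the absence of an occlusion test are all assumptions for illustration, not the paper's formulation.

```python
# Hypothetical depth-warping consistency loss (simplified: bilinear sampling,
# no occlusion handling).
import torch
import torch.nn.functional as F

def depth_warp_loss(depth_src, depth_tgt, K, T):
    """depth_src, depth_tgt: (H, W); K: (3, 3) intrinsics; T: (4, 4) source-to-target pose."""
    H, W = depth_src.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)       # homogeneous pixels
    pts = depth_src[..., None] * (pix @ torch.linalg.inv(K).T)  # back-project to 3D
    pts = pts @ T[:3, :3].T + T[:3, 3]                          # move into target frame
    proj = pts @ K.T                                            # project to target image
    z = proj[..., 2].clamp(min=1e-6)                            # warped depth
    grid = proj[..., :2] / z[..., None]                         # target pixel coords
    grid = torch.stack([grid[..., 0] / (W - 1),
                        grid[..., 1] / (H - 1)], dim=-1) * 2 - 1  # normalize for grid_sample
    sampled = F.grid_sample(depth_tgt[None, None], grid[None],
                            align_corners=True)[0, 0]           # target depth at warped coords
    return (z - sampled).abs().mean()                           # L1 consistency penalty
```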
- EGG-Fusion: Efficient 3D Reconstruction with Geometry-aware Gaussian Surfel on the Fly [8.803716785929936]
EGG-Fusion is a novel differentiable-rendering-based real-time reconstruction system. The proposed system achieves a surface reconstruction error of 0.6 cm, an improvement of over 20% in accuracy compared to state-of-the-art methods. Notably, the system maintains real-time processing capabilities at 24 FPS, establishing it as one of the most accurate differentiable-rendering-based real-time reconstruction systems.
arXiv Detail & Related papers (2025-12-01T05:32:17Z)
- Pseudo Depth Meets Gaussian: A Feed-forward RGB SLAM Baseline [64.42938561167402]
We propose an online 3D reconstruction method using 3D Gaussian-based SLAM, combined with a feed-forward recurrent prediction module. This approach replaces slow test-time optimization with fast network inference, significantly improving tracking speed. Our method achieves performance on par with the state-of-the-art SplaTAM, while reducing tracking time by more than 90%.
arXiv Detail & Related papers (2025-08-06T16:16:58Z)
- EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z)
- Taming Video Diffusion Prior with Scene-Grounding Guidance for 3D Gaussian Splatting from Sparse Inputs [28.381287866505637]
We propose a reconstruction-by-generation pipeline that leverages learned priors from video diffusion models to provide plausible interpretations for regions that are occluded or fall outside the field of view. We introduce a novel scene-grounding guidance based on rendered sequences from an optimized 3DGS, which tames the diffusion model to generate consistent sequences. Our method significantly improves upon the baseline and achieves state-of-the-art performance on challenging benchmarks.
arXiv Detail & Related papers (2025-03-07T01:59:05Z)
- MVS-GS: High-Quality 3D Gaussian Splatting Mapping via Online Multi-View Stereo [9.740087094317735]
We propose a novel framework for high-quality 3DGS modeling using an online multi-view stereo approach. Our method estimates MVS depth using sequential frames from a local time window and applies comprehensive depth refinement techniques. Experimental results demonstrate that our method outperforms state-of-the-art dense SLAM methods.
arXiv Detail & Related papers (2024-12-26T09:20:04Z)
- Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results. 3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z)
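The core substitution in 3DLS above is easy to visualize: a Gaussian kernel has a soft, infinite tail, while a linear (tent) kernel falls to exactly zero at a finite radius. The particular profile max(0, 1 - r) below is an assumption for illustration; the paper's kernel may be parameterized differently.

```python
# Gaussian vs. linear splat falloff, evaluated on a normalized radius r.
import numpy as np

def gaussian_kernel(r):
    return np.exp(-0.5 * r**2)        # smooth, never exactly zero

def linear_kernel(r):
    return np.maximum(0.0, 1.0 - r)   # compact support: zero beyond r = 1

r = np.linspace(0.0, 2.0, 5)
print(np.round(gaussian_kernel(r), 3))  # [1.    0.882 0.607 0.325 0.135]
print(np.round(linear_kernel(r), 3))    # [1.    0.5   0.    0.    0.   ]
```

The compact support means each splat touches fewer pixels, which is consistent with the reported FPS gain.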
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving superior rendering speed.
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- PF3plat: Pose-Free Feed-forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices. Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- Bootstrap-GS: Self-Supervised Augmentation for High-Fidelity Gaussian Splatting [9.817215106596146]
3D-GS faces limitations when generating novel views that significantly deviate from those encountered during training. We introduce a bootstrapping framework to address this problem. Our approach synthesizes pseudo-ground truth from novel views that align with the limited training set.
arXiv Detail & Related papers (2024-04-29T12:57:05Z)
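A heavily simplified view of the bootstrapping loop described for Bootstrap-GS above: render views the training set does not cover, repair them into pseudo-ground truth, and fold them back into optimization. Every name here (`enhance`, `optimize`, the round count) is a placeholder assumption, not the paper's pipeline.

```python
# Hypothetical self-supervised augmentation loop. `enhance` stands in for whatever
# model repairs artifact-prone novel-view renders into usable pseudo-ground truth.
def bootstrap_training(gaussians, train_views, novel_poses, enhance, optimize, rounds=3):
    dataset = list(train_views)                  # (pose, image) pairs
    for _ in range(rounds):
        optimize(gaussians, dataset)             # standard 3DGS optimization pass
        for pose in novel_poses:
            render = gaussians.render(pose)      # degraded out-of-distribution render
            pseudo_gt = enhance(render)          # repaired image as pseudo-ground truth
            dataset.append((pose, pseudo_gt))    # augment the training set
    return gaussians
```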
- SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior [53.52396082006044]
Current methods struggle to maintain rendering quality at viewpoints that deviate significantly from the training viewpoints.
This issue stems from the sparse training views captured by a fixed camera on a moving vehicle.
We propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model.
arXiv Detail & Related papers (2024-03-29T09:20:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.