Pseudo-View Enhancement via Confidence Fusion for Unposed Sparse-View Reconstruction
- URL: http://arxiv.org/abs/2602.21535v1
- Date: Wed, 25 Feb 2026 03:45:47 GMT
- Title: Pseudo-View Enhancement via Confidence Fusion for Unposed Sparse-View Reconstruction
- Authors: Beizhen Zhao, Sicheng Yu, Guanzhi Ding, Yu Hu, Hao Wang
- Abstract summary: 3D scene reconstruction under unposed sparse viewpoints is a highly challenging yet practically important problem. We propose a novel framework for sparse-view outdoor reconstruction that achieves high-quality results through bidirectional pseudo frame restoration. These designs significantly enhance reconstruction completeness, suppress floating artifacts, and improve overall geometric consistency under extreme view sparsity.
- Score: 7.8268279161630785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D scene reconstruction under unposed sparse viewpoints is a highly challenging yet practically important problem, especially in outdoor scenes due to complex lighting and scale variation. With extremely limited input views, directly using a diffusion model to synthesize pseudo frames introduces unreasonable geometry, which harms the final reconstruction quality. To address these issues, we propose a novel framework for sparse-view outdoor reconstruction that achieves high-quality results through bidirectional pseudo frame restoration and scene perception Gaussian management. Specifically, we introduce a bidirectional pseudo frame restoration method that restores missing content through diffusion-based synthesis guided by adjacent frames, with a lightweight pseudo-view deblur model and a confidence mask inference algorithm. We then propose a scene perception Gaussian management strategy that optimizes Gaussians based on joint depth-density information. These designs significantly enhance reconstruction completeness, suppress floating artifacts, and improve overall geometric consistency under extreme view sparsity. Experiments on outdoor benchmarks demonstrate substantial gains over existing methods in both fidelity and stability.
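The paper does not release code, but the two ideas the abstract names can be illustrated with a minimal sketch: (1) blending a diffusion-synthesized pseudo frame with geometry-consistent content warped from an adjacent view, weighted by a per-pixel confidence mask, and (2) pruning likely floaters with a joint depth-density criterion. All function names, array shapes, and thresholds below are hypothetical, chosen only to make the idea concrete; they are not the authors' implementation.

```python
import numpy as np

def fuse_pseudo_view(synth, warped, confidence):
    """Blend a diffusion-synthesized pseudo frame (synth, H x W x 3) with
    content warped from an adjacent input view (warped, H x W x 3), using a
    per-pixel confidence mask (H x W) in [0, 1]. Where confidence is high,
    the geometry-consistent warped pixels dominate; where it is low, the
    synthesized content fills in the missing regions."""
    c = confidence[..., None]            # broadcast mask over the RGB channels
    return c * warped + (1.0 - c) * synth

def floater_mask(depths, opacities, depth_tol=0.2, min_opacity=0.01):
    """Joint depth-density keep mask for a set of Gaussians: a Gaussian
    survives only if its opacity clears a floor AND its depth is not an
    extreme outlier relative to the scene's median depth. Isolated
    low-opacity blobs far off the depth distribution are the typical
    'floating artifacts' this kind of pruning targets."""
    median = np.median(depths)
    depth_ok = np.abs(depths - median) <= depth_tol * median
    return depth_ok & (opacities >= min_opacity)
```

With a uniform confidence of 0.75, for example, the fused frame is simply 75% warped content and 25% synthesized content per pixel; in practice the mask would vary spatially with the inferred reliability of the warp.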
Related papers
- SplatBright: Generalizable Low-Light Scene Reconstruction from Sparse Views via Physically-Guided Gaussian Enhancement [26.905118897488077]
SplatBright is the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction. Experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, cross-view consistency, and better generalization to unseen low-light scenes compared with both 2D and 3D methods.
arXiv Detail & Related papers (2025-12-21T09:06:16Z) - G4Splat: Geometry-Guided Gaussian Splatting with Generative Prior [53.762256749551284]
We identify accurate geometry as the fundamental prerequisite for effectively exploiting generative models to enhance 3D scene reconstruction. We incorporate this geometry guidance throughout the generative pipeline to improve visibility mask estimation, guide novel view selection, and enhance multi-view consistency when inpainting with video diffusion models. Our method naturally supports single-view inputs and unposed videos, with strong generalizability in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2025-10-14T03:06:28Z) - Gesplat: Robust Pose-Free 3D Reconstruction via Geometry-Guided Gaussian Splatting [21.952325954391508]
We introduce Gesplat, a 3DGS-based framework that enables robust novel view synthesis and geometrically consistent reconstruction from unposed sparse images. Our approach achieves more robust performance on both forward-facing and large-scale complex datasets compared to other pose-free methods.
arXiv Detail & Related papers (2025-10-11T08:13:46Z) - Inference-Time Search using Side Information for Diffusion-based Image Reconstruction [8.116163579233064]
We propose a novel inference-time search algorithm that guides the sampling process using the side information. Our approach can be seamlessly integrated into a wide range of existing diffusion-based image reconstruction pipelines.
arXiv Detail & Related papers (2025-10-02T20:16:00Z) - SparseRecon: Neural Implicit Surface Reconstruction from Sparse Views with Feature and Depth Consistencies [48.99420012507374]
We propose SparseRecon, a novel neural implicit reconstruction method for sparse views with volume rendering-based feature consistency and an uncertainty-guided depth constraint. We show that our method outperforms state-of-the-art methods, producing high-quality geometry from sparse-view input.
arXiv Detail & Related papers (2025-08-01T06:51:32Z) - Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering [51.223347330075576]
Ref-Unlock is a novel geometry-aware reflection modeling framework based on 3D Gaussian Splatting. Our approach employs a dual-branch representation with high-order spherical harmonics to capture high-frequency reflective details. Our method thus offers an efficient and generalizable solution for realistic rendering of reflective scenes.
arXiv Detail & Related papers (2025-07-08T15:45:08Z) - HiNeuS: High-fidelity Neural Surface Mitigating Low-texture and Reflective Ambiguity [8.74691272469226]
HiNeuS is a unified framework that holistically addresses three core limitations in existing approaches. We introduce: 1) differential visibility verification through SDF-guided ray tracing; 2) planar-conformal regularization via ray-aligned geometry patches; and 3) physically-grounded Eikonal relaxation that dynamically modulates geometric constraints based on local gradients.
arXiv Detail & Related papers (2025-06-30T13:45:25Z) - Decompositional Neural Scene Reconstruction with Generative Diffusion Prior [64.71091831762214]
Decompositional reconstruction of 3D scenes, with complete shapes and detailed texture, is intriguing for downstream applications. Recent approaches incorporate semantic or geometric regularization to address this issue, but they suffer significant degradation in underconstrained areas. We propose DP-Recon, which employs diffusion priors in the form of Score Distillation Sampling (SDS) to optimize the neural representation of each individual object under novel views.
arXiv Detail & Related papers (2025-03-19T02:11:31Z) - GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques popularly used for inverse rendering, forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z) - Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.