SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting
- URL: http://arxiv.org/abs/2602.24020v1
- Date: Fri, 27 Feb 2026 13:45:22 GMT
- Title: SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting
- Authors: Xiang Feng, Xiangbo Wang, Tieshi Zhong, Chengkai Wang, Yiting Zhao, Tianxiang Xu, Zhenzhong Kuang, Feiwei Qin, Xuefei Yin, Yanming Zhu
- Abstract summary: 3D super-resolution (3DSR) aims to reconstruct high-resolution (HR) 3D scenes from low-resolution (LR) multi-view images. Existing methods rely on dense LR inputs and per-scene optimization. We introduce SR3R, a feed-forward framework that directly predicts HR 3DGS representations from sparse LR views.
- Score: 13.122286465610323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D super-resolution (3DSR) aims to reconstruct high-resolution (HR) 3D scenes from low-resolution (LR) multi-view images. Existing methods rely on dense LR inputs and per-scene optimization, which restricts the high-frequency priors for constructing HR 3D Gaussian Splatting (3DGS) to those inherited from pretrained 2D super-resolution (2DSR) models. This severely limits reconstruction fidelity, cross-scene generalization, and real-time usability. We propose to reformulate 3DSR as a direct feed-forward mapping from sparse LR views to HR 3DGS representations, enabling the model to autonomously learn 3D-specific high-frequency geometry and appearance from large-scale, multi-scene data. This fundamentally changes how 3DSR acquires high-frequency knowledge and enables robust generalization to unseen scenes. Specifically, we introduce SR3R, a feed-forward framework that directly predicts HR 3DGS representations from sparse LR views via the learned mapping network. To further enhance reconstruction fidelity, we introduce Gaussian offset learning and feature refinement, which stabilize reconstruction and sharpen high-frequency details. SR3R is plug-and-play and can be paired with any feed-forward 3DGS reconstruction backbone: the backbone provides an LR 3DGS scaffold, and SR3R upscales it to an HR 3DGS. Extensive experiments across three 3D benchmarks demonstrate that SR3R surpasses state-of-the-art (SOTA) 3DSR methods and achieves strong zero-shot generalization, even outperforming SOTA per-scene optimization methods on unseen scenes.
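The plug-and-play pipeline the abstract describes — a backbone producing an LR 3DGS scaffold, which SR3R then upscales via Gaussian offset learning and feature refinement — can be sketched schematically. This is an illustrative NumPy mock-up under assumed shapes and names (`backbone_lr_scaffold`, `sr3r_upscale`, the 32-dim features, and the 4x upsampling factor are all hypothetical), not the paper's implementation:

```python
import numpy as np

def backbone_lr_scaffold(views, n_gaussians=1024, seed=0):
    """Stand-in for any feed-forward 3DGS backbone: maps sparse LR
    views to a coarse LR Gaussian scaffold (positions + features).
    Random values here take the place of a trained network."""
    rng = np.random.default_rng(seed)
    positions = rng.standard_normal((n_gaussians, 3))
    features = rng.standard_normal((n_gaussians, 32))
    return positions, features

def sr3r_upscale(positions, features, upsample=4):
    """Hypothetical SR3R stage: each LR scaffold Gaussian spawns
    `upsample` HR Gaussians via learned offsets, and its features
    are refined for the HR representation."""
    n, _ = positions.shape
    # Gaussian offset learning: small displacements around each
    # scaffold Gaussian (random here in place of a learned mapping).
    offsets = 0.05 * np.random.default_rng(1).standard_normal((n, upsample, 3))
    hr_positions = (positions[:, None, :] + offsets).reshape(-1, 3)
    # Feature refinement: replicate LR features per spawned Gaussian
    # (a trained refiner would sharpen high-frequency detail here).
    hr_features = np.repeat(features, upsample, axis=0)
    return hr_positions, hr_features

views = None  # sparse LR input views would go here
pos, feat = backbone_lr_scaffold(views)
hr_pos, hr_feat = sr3r_upscale(pos, feat)
print(hr_pos.shape)  # (4096, 3): 4x more Gaussians than the LR scaffold
```

The key design point the sketch mirrors is the interface: the backbone and the upscaler communicate only through the Gaussian scaffold, which is why any feed-forward 3DGS backbone can be swapped in.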
Related papers
- MVGSR: Multi-View Consistent 3D Gaussian Super-Resolution via Epipolar Guidance [13.050002358238793]
We introduce Multi-View Consistent 3D Gaussian Splatting Super-Resolution (MVGSR). MVGSR focuses on integrating multi-view information for 3DGS rendering with high-frequency details and enhanced consistency. Our method achieves state-of-the-art performance on both object-centric and scene-level 3DGS SR benchmarks.
arXiv Detail & Related papers (2025-12-17T03:23:12Z)
- SplatSuRe: Selective Super-Resolution for Multi-view Consistent 3D Gaussian Splatting [50.36978600976209]
A natural strategy is to apply super-resolution (SR) to low-resolution (LR) input views, but independently enhancing each image introduces multi-view inconsistencies. We propose SplatSuRe, a method that selectively applies SR content only in undersampled regions lacking high-frequency supervision. Across Tanks & Temples, Deep Blending and Mip-NeRF 360, our approach surpasses baselines in both fidelity and perceptual quality.
arXiv Detail & Related papers (2025-12-01T20:08:39Z)
- Bridging Diffusion Models and 3D Representations: A 3D Consistent Super-Resolution Framework [51.20764440735875]
We propose 3D Super Resolution (3DSR), a novel 3D Gaussian-splatting-based super-resolution framework. 3DSR encourages 3D consistency across views via the use of an explicit 3D Gaussian-splatting-based scene representation. We evaluate 3DSR on MipNeRF360 and LLFF data, demonstrating that it produces high-resolution results that are visually compelling.
arXiv Detail & Related papers (2025-08-06T05:12:02Z)
- SparSplat: Fast Multi-View Reconstruction with Generalizable 2D Gaussian Splatting [7.9061560322289335]
We propose an MVS-based learning approach that regresses 2DGS surface parameters in a feed-forward fashion to perform 3D shape reconstruction and NVS from sparse-view images. The resulting pipeline attains state-of-the-art results on the DTU 3D reconstruction benchmark in terms of Chamfer distance to ground truth, as well as state-of-the-art NVS.
arXiv Detail & Related papers (2025-05-04T16:33:47Z)
- GaussHDR: High Dynamic Range Gaussian Splatting via Learning Unified 3D and 2D Local Tone Mapping [17.42021596542516]
We present GaussHDR, which unifies both 3D and 2D local tone mapping through 3D Gaussian splatting. We then propose combining the LDR results from both 3D and 2D local tone mapping at the loss level.
arXiv Detail & Related papers (2025-03-13T08:07:43Z)
- S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction [58.37746062258149]
3D Gaussian Splatting (3DGS) has reshaped the field of 3D reconstruction, achieving impressive rendering quality and speed. Existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases. We propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction.
arXiv Detail & Related papers (2025-03-11T09:37:13Z)
- StructGS: Adaptive Spherical Harmonics and Rendering Enhancements for Superior 3D Gaussian Splatting [5.759434800012218]
StructGS is a framework that enhances 3D Gaussian Splatting (3DGS) for improved novel-view synthesis in 3D reconstruction. Our framework significantly reduces computational redundancy, enhances detail capture, and supports high-resolution rendering from low-resolution inputs.
arXiv Detail & Related papers (2025-03-09T05:39:44Z)
- FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction [69.63414788486578]
FreeSplatter is a scalable feed-forward framework that generates high-quality 3D Gaussians from uncalibrated sparse-view images. Our approach employs a streamlined transformer architecture where self-attention blocks facilitate information exchange. We develop two specialized variants, for object-centric and scene-level reconstruction, trained on comprehensive datasets.
arXiv Detail & Related papers (2024-12-12T18:52:53Z)
- Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results. 3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z)
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
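A generic multi-round prune-refine loop of the kind this summary describes can be sketched as follows. The sensitivity score below is a simple opacity-times-footprint placeholder, not PUP 3D-GS's actual uncertainty-based score, and `keep_ratio` and `rounds` are illustrative:

```python
import numpy as np

def sensitivity_score(opacities, scales):
    """Hypothetical stand-in for a per-Gaussian sensitivity score:
    a crude opacity-times-mean-scale proxy. PUP 3D-GS derives a
    principled score from reconstruction uncertainty instead."""
    return opacities * scales.mean(axis=1)

def prune_refine(opacities, scales, rounds=3, keep_ratio=0.8):
    """Multi-round prune-refine: each round drops the least sensitive
    Gaussians; a real pipeline would fine-tune the survivors between
    rounds. Returns indices of the Gaussians that remain."""
    keep = np.arange(len(opacities))
    for _ in range(rounds):
        scores = sensitivity_score(opacities[keep], scales[keep])
        cutoff = np.quantile(scores, 1.0 - keep_ratio)
        keep = keep[scores >= cutoff]
        # (refinement / fine-tuning of surviving Gaussians goes here)
    return keep

rng = np.random.default_rng(42)
opacities = rng.uniform(size=10_000)
scales = rng.uniform(size=(10_000, 3))
kept = prune_refine(opacities, scales)
print(len(kept) / 10_000)  # ~0.8**3 = 0.512 of the Gaussians survive
```

Pruning in several small rounds with refinement in between, rather than one aggressive cut, is what lets such pipelines reach high compression ratios without visible fidelity loss.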
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- From Chaos to Clarity: 3DGS in the Dark [28.232432162734437]
Noise in unprocessed raw images compromises the accuracy of 3D scene representation. 3D Gaussian Splatting (3DGS) is particularly susceptible to this noise. We introduce a novel self-supervised learning framework designed to reconstruct HDR 3DGS from noisy raw images.
arXiv Detail & Related papers (2024-06-12T15:00:16Z)