SplatSuRe: Selective Super-Resolution for Multi-view Consistent 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2512.02172v1
- Date: Mon, 01 Dec 2025 20:08:39 GMT
- Title: SplatSuRe: Selective Super-Resolution for Multi-view Consistent 3D Gaussian Splatting
- Authors: Pranav Asthana, Alex Hanson, Allen Tu, Tom Goldstein, Matthias Zwicker, Amitabh Varshney
- Abstract summary: A natural strategy is to apply super-resolution (SR) to low-resolution (LR) input views, but independently enhancing each image introduces multi-view inconsistencies. We propose SplatSuRe, a method that selectively applies SR content only in undersampled regions lacking high-frequency supervision. Across Tanks & Temples, Deep Blending and Mip-NeRF 360, our approach surpasses baselines in both fidelity and perceptual quality.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) enables high-quality novel view synthesis, motivating interest in generating higher-resolution renders than those available during training. A natural strategy is to apply super-resolution (SR) to low-resolution (LR) input views, but independently enhancing each image introduces multi-view inconsistencies, leading to blurry renders. Prior methods attempt to mitigate these inconsistencies through learned neural components, temporally consistent video priors, or joint optimization on LR and SR views, but all uniformly apply SR across every image. In contrast, our key insight is that close-up LR views may contain high-frequency information for regions also captured in more distant views, and that we can use the camera pose relative to scene geometry to inform where to add SR content. Building from this insight, we propose SplatSuRe, a method that selectively applies SR content only in undersampled regions lacking high-frequency supervision, yielding sharper and more consistent results. Across Tanks & Temples, Deep Blending and Mip-NeRF 360, our approach surpasses baselines in both fidelity and perceptual quality. Notably, our gains are most significant in localized foreground regions where higher detail is desired.
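The paper's key insight, using camera pose relative to scene geometry to decide where SR content is needed, can be sketched as a per-point sampling-rate test: a region lacks high-frequency supervision only if even its closest training view observes it more coarsely than the target high-resolution render would. The sketch below is a minimal illustration under a pinhole-camera assumption; the function names and the `(position, focal_px)` camera tuples are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def pixel_footprint(depth, focal_px):
    # World-space extent covered by one pixel at a given depth
    # under a pinhole camera model: footprint = depth / focal length (in pixels).
    return depth / focal_px

def undersampled(point, train_cams, novel_cam):
    # train_cams: list of (camera_position, focal_length_in_pixels) for LR views
    # novel_cam:  (position, focal_px) of the high-resolution target view
    # The point is undersampled if the finest LR observation is still coarser
    # than what the target HR view demands -- only there would SR content help.
    finest_lr = min(pixel_footprint(np.linalg.norm(point - p), f)
                    for p, f in train_cams)
    target = pixel_footprint(np.linalg.norm(point - novel_cam[0]), novel_cam[1])
    return finest_lr > target
```

In this toy model, adding a single close-up LR view of a region can remove the need for SR there, which mirrors the paper's observation that close-up LR views already carry high-frequency information for distant views of the same geometry.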
Related papers
- MVGSR: Multi-View Consistent 3D Gaussian Super-Resolution via Epipolar Guidance [13.050002358238793]
We introduce Multi-View Consistent 3D Gaussian Splatting Super-Resolution (MVGSR). MVGSR focuses on integrating multi-view information for 3DGS rendering with high-frequency details and enhanced consistency. Our method achieves state-of-the-art performance on both object-centric and scene-level 3DGS SR benchmarks.
arXiv Detail & Related papers (2025-12-17T03:23:12Z) - SRSplat: Feed-Forward Super-Resolution Gaussian Splatting from Sparse Multi-View Images [22.87137082795346]
We propose SRSplat, a feed-forward framework that reconstructs high-resolution 3D scenes from only a few LR views. Our main insight is to compensate for the deficiency of texture information by jointly leveraging external high-quality reference images and internal texture cues.
arXiv Detail & Related papers (2025-11-15T05:17:44Z) - GaussianLens: Localized High-Resolution Reconstruction via On-Demand Gaussian Densification [77.40235389999]
We propose a generalizable network that densifies the initial 3DGS to capture fine details in a user-specified local region of interest. Experiments demonstrate our method's superior performance in local fine-detail reconstruction and strong scalability to images of up to 1024×1024 resolution.
arXiv Detail & Related papers (2025-09-29T23:58:49Z) - Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale Super-Resolution [10.074968164380314]
Implicit Neural Representations (INR) have been successfully employed for Arbitrary-scale Super-Resolution (ASR). However, INR-based models need to query the multi-layer perceptron module numerous times, rendering one pixel per query. GS has shown advantages over INR in both visual quality and rendering speed in 3D tasks, which motivates us to explore whether GS can be employed for the ASR task.
arXiv Detail & Related papers (2025-01-12T15:14:58Z) - ASSR-NeRF: Arbitrary-Scale Super-Resolution on Voxel Grid for High-Quality Radiance Fields Reconstruction [27.21399221644529]
NeRF-based methods reconstruct 3D scenes by building a radiance field with implicit or explicit representations.
We propose Arbitrary-Scale Super-Resolution NeRF (ASSR-NeRF), a novel framework for super-resolution novel view synthesis (SRNVS).
arXiv Detail & Related papers (2024-06-28T17:22:33Z) - FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes [50.534213038479926]
FreeSplat is capable of reconstructing geometrically consistent 3D scenes from long sequence input towards free-view synthesis.
We propose a simple but effective free-view training strategy that ensures robust view synthesis across broader view range regardless of the number of views.
arXiv Detail & Related papers (2024-05-28T08:40:14Z) - SRGS: Super-Resolution 3D Gaussian Splatting [14.26021476067791]
We propose Super-Resolution 3D Gaussian Splatting (SRGS) to perform the optimization in a high-resolution (HR) space.
The sub-pixel constraint is introduced for the increased viewpoints in HR space, exploiting the sub-pixel cross-view information of the multiple low-resolution (LR) views.
Our method achieves high rendering quality on HRNVS only with LR inputs, outperforming state-of-the-art methods on challenging datasets such as Mip-NeRF 360 and Tanks & Temples.
arXiv Detail & Related papers (2024-04-16T06:58:30Z) - CiaoSR: Continuous Implicit Attention-in-Attention Network for
Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
arXiv Detail & Related papers (2022-12-08T15:57:46Z) - Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution, zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
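The Gaussian-for-ASR idea in the 2D Gaussian Splatting entry above rests on a simple property: once a set of 2D Gaussians is fitted to an image, it can be rasterized at any target resolution in a single pass, unlike an INR that must query an MLP per output pixel. The sketch below is a minimal, unoptimized illustration of that property; all function and parameter names are assumptions, not the paper's API.

```python
import numpy as np

def render_gaussians(means, covs_inv, colors, H, W):
    # Rasterize N isotropic-or-anisotropic 2D Gaussians onto an H x W grid.
    # means:    (N, 2) centers in the unit square [0, 1]^2
    # covs_inv: (N, 2, 2) inverse covariance matrices
    # colors:   (N,) scalar intensities
    # The same fitted Gaussians can be rendered at any (H, W) -- this
    # resolution-independence is what makes the representation suit ASR.
    ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W),
                         indexing="ij")
    grid = np.stack([xs, ys], axis=-1)            # (H, W, 2) pixel coordinates
    img = np.zeros((H, W))
    for mu, s_inv, c in zip(means, covs_inv, colors):
        d = grid - mu                             # offsets from the center
        quad = np.einsum("hwi,ij,hwj->hw", d, s_inv, d)
        img += c * np.exp(-0.5 * quad)            # accumulate Gaussian falloff
    return img
```

Rendering the same `means`/`covs_inv`/`colors` at 64×64 and then at 512×512 requires no refitting, only a second call with different `H, W`.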
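The sub-pixel constraint described in the SRGS entry above can be illustrated with a downsampling consistency loss: the scene is rendered at high resolution, average-pooled back down to the training resolution, and compared against the low-resolution ground truth, so the HR render must stay consistent with every LR view. This is a hedged sketch; the function name and the choice of average pooling with an L1 penalty are assumptions, not SRGS's exact formulation.

```python
import numpy as np

def subpixel_loss(hr_render, lr_gt, scale):
    # hr_render: (h * scale, w * scale) high-resolution render
    # lr_gt:     (h, w) low-resolution ground-truth view
    # Average-pool the HR render by `scale` in each dimension, then take the
    # mean absolute difference against the LR image. A zero loss means the HR
    # render is exactly consistent with the LR observation.
    h, w = lr_gt.shape
    pooled = hr_render.reshape(h, scale, w, scale).mean(axis=(1, 3))
    return np.abs(pooled - lr_gt).mean()
```

Under this constraint the optimizer is free to add detail within each LR pixel's footprint, as long as the sub-pixel contributions still average back to the observed LR value.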
This list is automatically generated from the titles and abstracts of the papers in this site.