COSMOS: Coherent Supergaussian Modeling with Spatial Priors for Sparse-View 3D Splatting
- URL: http://arxiv.org/abs/2602.06044v1
- Date: Wed, 17 Dec 2025 04:55:15 GMT
- Title: COSMOS: Coherent Supergaussian Modeling with Spatial Priors for Sparse-View 3D Splatting
- Authors: Chaeyoung Jeong, Kwangsu Kim
- Abstract summary: 3D Gaussian Splatting (3DGS) has recently emerged as a promising approach for 3D reconstruction. We propose Coherent Supergaussian Modeling with Spatial Priors (COSMOS) to address this issue. Our experiments on Blender and DTU show that COSMOS surpasses state-of-the-art methods in sparse-view settings without any external depth supervision.
- Score: 2.549594916296063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has recently emerged as a promising approach for 3D reconstruction, providing explicit, point-based representations and enabling high-quality real-time rendering. However, when trained with sparse input views, 3DGS suffers from overfitting and structural degradation, leading to poor generalization on novel views. This limitation arises because its optimization relies solely on photometric loss without incorporating any 3D structure priors. To address this issue, we propose Coherent Supergaussian Modeling with Spatial Priors (COSMOS). Inspired by the concept of superpoints from 3D segmentation, COSMOS introduces 3D structure priors by newly defining supergaussian groupings of Gaussians based on local geometric cues and appearance features. To this end, COSMOS applies inter-group global self-attention across supergaussian groups and sparse local attention among individual Gaussians, enabling the integration of global and local spatial information. These structure-aware features are then used to predict Gaussian attributes, facilitating more consistent 3D reconstructions. Furthermore, by leveraging supergaussian-based grouping, COSMOS enforces an intra-group positional regularization to maintain structural coherence and suppress floaters, thereby enhancing training stability under sparse-view conditions. Our experiments on Blender and DTU show that COSMOS surpasses state-of-the-art methods in sparse-view settings without any external depth supervision.
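The abstract's two concrete mechanisms — grouping Gaussians into supergaussians from geometric and appearance cues, and an intra-group positional regularizer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clustering choice (k-means on concatenated position/feature vectors with farthest-point initialization) and both function names are assumptions made for the sketch.

```python
import numpy as np

def group_supergaussians(positions, features, k):
    """Group Gaussians into k supergaussians by clustering on
    concatenated position and appearance features (a hypothetical
    stand-in for the paper's grouping by local geometric cues)."""
    x = np.concatenate([positions, features], axis=1).astype(float)
    # Deterministic farthest-point initialization of cluster centers.
    centers = [x[0]]
    for _ in range(1, k):
        d = np.linalg.norm(x[:, None] - np.array(centers)[None], axis=2).min(axis=1)
        centers.append(x[d.argmax()])
    centers = np.array(centers)
    for _ in range(10):  # a few Lloyd iterations
        labels = np.linalg.norm(x[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def intra_group_position_loss(positions, labels):
    """Mean squared deviation of each Gaussian's center from its
    group mean: a simple intra-group positional regularizer that
    pulls group members together and penalizes isolated floaters."""
    loss = 0.0
    for j in np.unique(labels):
        p = positions[labels == j]
        loss += ((p - p.mean(axis=0)) ** 2).sum()
    return loss / len(positions)
```

In a training loop, such a loss term would be added to the photometric loss, so Gaussians that drift far from their supergaussian (floaters) are penalized even where photometric supervision from sparse views is weak.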
Related papers
- Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding [86.55824709875598]
We propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes both semantic and rendering branches. Unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor to capture fine-grained 3D shape details. We employ a cross-scene knowledge transfer module to continuously update learned shape patterns, enabling faster convergence and robust representations.
arXiv Detail & Related papers (2026-01-05T18:33:50Z)
- SaLon3R: Structure-aware Long-term Generalizable 3D Reconstruction from Unposed Images [31.94503176488054]
SaLon3R is a novel framework for Structure-aware, Long-term 3DGS Reconstruction. It is capable of reconstructing over 50 views at over 10 FPS, with 50% to 90% redundancy removal. Our approach effectively resolves artifacts and prunes redundant 3DGS in a single feed-forward pass.
arXiv Detail & Related papers (2025-10-16T18:37:10Z)
- Visibility-Aware Densification for 3D Gaussian Splatting in Dynamic Urban Scenes [7.253732091582086]
VAD-GS is a 3DGS framework tailored for geometry recovery in challenging urban scenes. Our method identifies unreliable geometry structures via voxel-based visibility reasoning. It selects informative supporting views through diversity-aware view selection, and recovers missing structures via patch matching-based stereo reconstruction.
arXiv Detail & Related papers (2025-10-10T13:22:12Z)
- Intern-GS: Vision Model Guided Sparse-View 3D Gaussian Splatting [95.61137026932062]
Intern-GS is a novel approach to enhance the process of sparse-view Gaussian splatting. We show that Intern-GS achieves state-of-the-art rendering quality across diverse datasets.
arXiv Detail & Related papers (2025-05-27T05:17:49Z)
- TSGaussian: Semantic and Depth-Guided Target-Specific Gaussian Splatting from Sparse Views [18.050257821756148]
TSGaussian is a novel framework that combines semantic constraints with depth priors to avoid geometry degradation in novel view synthesis tasks. Our approach prioritizes computational resources on designated targets while minimizing background allocation. Extensive experiments demonstrate that TSGaussian outperforms state-of-the-art methods on three standard datasets.
arXiv Detail & Related papers (2024-12-13T11:26:38Z)
- MonoGSDF: Exploring Monocular Geometric Cues for Gaussian Splatting-Guided Implicit Surface Reconstruction [86.87464903285208]
We introduce MonoGSDF, a novel method that couples primitives with a neural Signed Distance Field (SDF) for high-quality reconstruction. To handle arbitrary-scale scenes, we propose a scaling strategy for robust generalization. Experiments on real-world datasets show it outperforms prior methods while maintaining efficiency.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS. PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [100.90743697473232]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering. Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images. We propose a view synthesis framework based on 3D Gaussian Splatting, enabling scene reconstruction from sparse views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z)
- GaussianRoom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction [5.112375652774415]
We propose a unified optimization framework that integrates neural signed distance fields (SDFs) with 3DGS for accurate geometry reconstruction and real-time rendering. Our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-05-30T03:46:59Z)
- Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.