Sparse Point Cloud Patches Rendering via Splitting 2D Gaussians
- URL: http://arxiv.org/abs/2505.09413v1
- Date: Wed, 14 May 2025 14:10:09 GMT
- Title: Sparse Point Cloud Patches Rendering via Splitting 2D Gaussians
- Authors: Ma Changfeng, Bi Ran, Guo Jie, Wang Chongjun, Guo Yanwen
- Abstract summary: Current learning-based methods predict NeRF or 3D Gaussians from point clouds to achieve photo-realistic rendering. We introduce a novel point cloud rendering method by predicting 2D Gaussians from point clouds. We conduct extensive experiments on various datasets, and the results demonstrate the superiority and generalization of our method.
- Score: 0.19972837513980318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current learning-based methods predict NeRF or 3D Gaussians from point clouds to achieve photo-realistic rendering but still depend on categorical priors, dense point clouds, or additional refinements. Hence, we introduce a novel point cloud rendering method that predicts 2D Gaussians from point clouds. Our method incorporates two identical modules with an entire-patch architecture, enabling the network to generalize to multiple datasets. Each module normalizes and initializes the Gaussians using point cloud information, including normals, colors, and distances. Splitting decoders are then employed to refine the initial Gaussians by duplicating them and predicting more accurate results, allowing our method to accommodate sparse point clouds as well. Once trained, our approach generalizes directly to point clouds across different categories. The predicted Gaussians are used directly for rendering without additional refinement of the rendered images, retaining the benefits of 2D Gaussians. We conduct extensive experiments on various datasets, and the results demonstrate the superiority and generalization of our method, which achieves SOTA performance. The code is available at https://github.com/murcherful/GauPCRender.
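The initialize-then-split pipeline described in the abstract (one 2D surfel-like Gaussian per point, oriented by the normal, scaled by neighbor distances, then duplicated and perturbed) can be sketched roughly as follows. This is a minimal NumPy illustration of the idea only: the function name, the fixed random jitter, and all parameter choices are assumptions, standing in for the paper's learned splitting decoders.

```python
import numpy as np

def init_and_split_2d_gaussians(points, normals, colors, k_split=2, eps=1e-8):
    """Initialize 2D (surfel-like) Gaussians from an oriented, colored
    point cloud, then split each one. A rough sketch of the abstract's
    idea, not the paper's actual network."""
    n = points.shape[0]

    # Scale: distance to the nearest neighbor gives an isotropic
    # footprint, so sparse regions get larger disks.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    scale = np.sqrt(d2.min(axis=1))[:, None].repeat(2, axis=1)

    # Orientation: build a tangent frame (t1, t2) orthogonal to each
    # normal; the 2D Gaussian lies in the plane spanned by (t1, t2).
    nrm = normals / (np.linalg.norm(normals, axis=1, keepdims=True) + eps)
    helper = np.where(np.abs(nrm[:, :1]) < 0.9,
                      np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
    t1 = np.cross(nrm, helper)
    t1 /= np.linalg.norm(t1, axis=1, keepdims=True) + eps
    t2 = np.cross(nrm, t1)

    gaussians = {"center": points, "tangent_u": t1, "tangent_v": t2,
                 "scale": scale, "color": colors,
                 "opacity": np.ones((n, 1))}

    # "Splitting": duplicate every Gaussian k_split times with small
    # in-plane offsets. A learned decoder would predict these offsets
    # and refined scales/colors instead of applying fixed jitter.
    rng = np.random.default_rng(0)
    offs = rng.normal(scale=0.25, size=(k_split, n, 2))
    centers = np.concatenate(
        [points
         + offs[i, :, :1] * t1 * scale[:, :1]
         + offs[i, :, 1:] * t2 * scale[:, 1:2]
         for i in range(k_split)], axis=0)
    split = {k: np.concatenate([v] * k_split, axis=0)
             for k, v in gaussians.items()}
    split["center"] = centers
    return gaussians, split
```

In the actual method the duplicated Gaussians' offsets, scales, and colors are predicted by the splitting decoders, which is what lets the approach densify sparse inputs rather than merely jittering them.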
Related papers
- GAP: Gaussianize Any Point Clouds with Text Guidance [29.002390913738203]
We propose GAP, a novel approach that gaussianizes raw point clouds into high-fidelity 3D Gaussians with text guidance. To ensure geometric accuracy, we introduce a surface-anchoring mechanism that effectively constrains Gaussians to lie on the surfaces of 3D shapes. We evaluate GAP on the Point-to-Gaussian generation task across varying complexity levels, from synthetic point clouds to challenging real-world scans, and even large-scale scenes.
arXiv Detail & Related papers (2025-08-07T17:59:27Z) - UniPre3D: Unified Pre-training of 3D Point Cloud Models with Cross-Modal Gaussian Splatting [64.31900521467362]
No existing pre-training method is equally effective for both object- and scene-level point clouds. We introduce UniPre3D, the first unified pre-training method that can be seamlessly applied to point clouds of any scale and 3D models of any architecture.
arXiv Detail & Related papers (2025-06-11T17:23:21Z) - GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a faster rendering speed.
arXiv Detail & Related papers (2024-11-18T08:18:44Z) - UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images [20.089890859122168]
We introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model. UniGS predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images.
arXiv Detail & Related papers (2024-10-17T03:48:02Z) - P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z) - ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z) - PFGS: High Fidelity Point Cloud Rendering via Feature Splatting [5.866747029417274]
We propose a novel framework to render high-quality images from sparse points.
This method first attempts to bridge the 3D Gaussian Splatting and point cloud rendering.
Experiments on different benchmarks show the superiority of our method in terms of rendering quality and the necessity of our main components.
arXiv Detail & Related papers (2024-07-04T11:42:54Z) - LeanGaussian: Breaking Pixel or Point Cloud Correspondence in Modeling 3D Gaussians [11.71048049090424]
We introduce LeanGaussian, a novel approach that treats each query in a deformable Transformer as one 3D Gaussian ellipsoid. We leverage a deformable decoder to iteratively refine the Gaussians layer-by-layer with the image features as keys and values. Our approach outperforms prior methods by approximately 6.1%, achieving PSNRs of 25.44 and 22.36, respectively.
arXiv Detail & Related papers (2024-04-25T04:18:59Z) - ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors [52.72867922938023]
3D point clouds directly collected from objects through sensors are often incomplete due to self-occlusion. We propose a test-time framework for completing partial point clouds across unseen categories without any requirement for training.
arXiv Detail & Related papers (2024-04-10T08:02:17Z) - GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting [82.29476781526752]
Reconstructing and rendering 3D objects from highly sparse views is of critical importance for promoting applications of 3D vision techniques.
GaussianObject is a framework to represent and render the 3D object with Gaussian splatting that achieves high rendering quality with only 4 input images.
GaussianObject is evaluated on several challenging datasets, including MipNeRF360, OmniObject3D, OpenIllumination, and a set of unposed images we collected.
arXiv Detail & Related papers (2024-02-15T18:42:33Z) - Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work despite the fact that those methods use significantly more complex pipelines, 3D models and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.