PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
- URL: http://arxiv.org/abs/2407.03857v1
- Date: Thu, 4 Jul 2024 11:42:54 GMT
- Title: PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
- Authors: Jiaxu Wang, Ziyi Zhang, Junhao He, Renjing Xu
- Abstract summary: We propose a novel framework to render high-quality images from sparse points.
This method is the first attempt to bridge 3D Gaussian Splatting and point cloud rendering.
Experiments on different benchmarks show the superiority of our method in terms of rendering quality and the necessity of our main components.
- Score: 5.866747029417274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rendering high-fidelity images from sparse point clouds is still challenging. Existing learning-based approaches suffer from either hole artifacts, missing details, or expensive computations. In this paper, we propose a novel framework to render high-quality images from sparse points. This method is the first attempt to bridge 3D Gaussian Splatting and point cloud rendering, and it comprises several cascaded modules. We first use a regressor to estimate Gaussian properties in a point-wise manner; the estimated properties are then used to rasterize neural feature descriptors, extracted by a multiscale extractor, into 2D planes. The projected feature volume is gradually decoded toward the final prediction via a multiscale and progressive decoder. The whole pipeline undergoes two-stage training and is driven by our well-designed progressive and multiscale reconstruction loss. Experiments on different benchmarks show the superiority of our method in terms of rendering quality and the necessity of our main components.
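The core idea in the abstract (regress per-point Gaussian properties, then splat per-point feature descriptors into a 2D feature plane before decoding) can be illustrated with a minimal sketch. This is not the authors' implementation: the regressor is replaced by a stub, the projection is assumed orthographic, and there is no depth sorting or learned decoder; all function names are hypothetical.

```python
import numpy as np

def regress_gaussian_props(points):
    """Stub for the point-wise regressor (a learned network in PFGS).
    Here 2D means, isotropic scales, and opacities are derived directly
    from the point coordinates for illustration (assumption)."""
    means2d = points[:, :2]                 # orthographic projection (assumption)
    scales = np.full(len(points), 1.5)      # fixed Gaussian footprint in pixels
    opacity = np.full(len(points), 0.8)     # constant opacity
    return means2d, scales, opacity

def splat_features(points, feats, height=32, width=32):
    """Rasterize per-point feature descriptors into a 2D feature plane by
    opacity-weighted Gaussian splatting (simplified: no depth ordering)."""
    means2d, scales, opacity = regress_gaussian_props(points)
    plane = np.zeros((height, width, feats.shape[1]))
    weight = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for mu, s, a, f in zip(means2d, scales, opacity, feats):
        # Gaussian footprint centred at the projected point
        g = a * np.exp(-((xs - mu[0]) ** 2 + (ys - mu[1]) ** 2) / (2 * s ** 2))
        plane += g[..., None] * f           # accumulate weighted features
        weight += g                          # accumulate splat weights
    return plane / np.maximum(weight, 1e-8)[..., None]  # normalise

rng = np.random.default_rng(0)
pts = rng.uniform(0, 32, size=(100, 3))      # sparse point cloud in image space
feats = rng.normal(size=(100, 8))            # neural feature descriptors
volume = splat_features(pts, feats)
print(volume.shape)                          # (32, 32, 8)
```

In the paper this projected feature volume would then pass through the multiscale, progressive decoder to produce the final image; the sketch stops at the splatted feature plane.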
Related papers
- Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting [7.070581940661794]
We propose an efficient framework named GS$^3$ to learn point cloud representation.
Specifically, we back-project the input RGB-D images into 3D space and use a point cloud encoder to extract point-wise features.
arXiv Detail & Related papers (2024-11-27T16:11:45Z)
- Few-shot Novel View Synthesis using Depth Aware 3D Gaussian Splatting [0.0]
3D Gaussian splatting has surpassed neural radiance field methods in novel view synthesis.
It produces a high-quality rendering with a lot of input views, but its performance drops significantly when only a few views are available.
We propose a depth-aware Gaussian splatting method for few-shot novel view synthesis.
arXiv Detail & Related papers (2024-10-14T20:42:30Z)
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios.
We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering [6.142272540492937]
We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
arXiv Detail & Related papers (2024-01-11T16:06:36Z)
- Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use Neural Radiance Fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
arXiv Detail & Related papers (2023-12-06T00:46:30Z)
- TriVol: Point Cloud Rendering via Triple Volumes [57.305748806545026]
We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
arXiv Detail & Related papers (2023-03-29T06:34:12Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural encoders.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Learning Local Displacements for Point Cloud Completion [93.54286830844134]
We propose a novel approach aimed at object and semantic scene completion from a partial scan represented as a 3D point cloud.
Our architecture relies on three novel layers that are used successively within an encoder-decoder structure.
We evaluate both architectures on object and indoor scene completion tasks, achieving state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T18:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.