Low Latency Point Cloud Rendering with Learned Splatting
- URL: http://arxiv.org/abs/2409.16504v1
- Date: Tue, 24 Sep 2024 23:26:07 GMT
- Title: Low Latency Point Cloud Rendering with Learned Splatting
- Authors: Yueyu Hu, Ran Gong, Qi Sun, Yao Wang
- Abstract summary: High-quality rendering of point clouds is challenging because of point sparsity and irregularity.
Existing rendering solutions fall short in either quality or speed.
We present a framework that unlocks interactive, free-viewing, and high-fidelity point cloud rendering.
- Score: 24.553459204476432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds are a critical 3D representation with many emerging applications. Because of point sparsity and irregularity, high-quality rendering of point clouds is challenging and often requires complex computations to recover a continuous surface representation. At the same time, to avoid visual discomfort, the motion-to-photon latency has to be very short, under 10 ms. Existing rendering solutions fall short in either quality or speed. To tackle these challenges, we present a framework that unlocks interactive, free-viewing, and high-fidelity point cloud rendering. We train a generic neural network to estimate 3D elliptical Gaussians from arbitrary point clouds and use differentiable surface splatting to render smooth texture and surface normals for arbitrary views. Our approach does not require per-scene optimization and enables real-time rendering of dynamic point clouds. Experimental results demonstrate that the proposed solution achieves superior visual quality and speed, as well as generalizability to different scene content and robustness to compression artifacts. The code is available at https://github.com/huzi96/gaussian-pcloud-render.
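The following is a minimal NumPy sketch of the core idea in the abstract: turn each point into a Gaussian and splat the Gaussians into an image by screen-space weighted blending. It is an illustration only, not the paper's method: the learned network that predicts elliptical Gaussians is replaced by a nearest-neighbor heuristic yielding isotropic Gaussians, and occlusion handling and differentiability are omitted; all names and constants are hypothetical.

```python
import numpy as np

def estimate_gaussians(points, k_scale=0.7):
    """Stand-in for the paper's learned predictor: one isotropic Gaussian
    per point, with scale set by the nearest-neighbor distance."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return k_scale * np.sqrt(d2.min(axis=1))         # per-point std dev (N,)

def splat(points, colors, sigma, f=300.0, res=(256, 256)):
    """Project Gaussians through a pinhole camera at the origin and blend
    them per pixel with a normalized weighted average (no occlusion)."""
    h, w = res
    img = np.zeros((h, w, 3))
    wsum = np.full((h, w), 1e-8)                     # avoid divide-by-zero
    for (x, y, z), c, sg in zip(points, colors, sigma):
        if z <= 0:                                   # behind the camera
            continue
        u, v = f * x / z + w / 2, f * y / z + h / 2  # perspective projection
        s = max(f * sg / z, 0.5)                     # screen-space std dev
        r = int(np.ceil(3 * s))                      # 3-sigma footprint
        x0, x1 = max(int(u) - r, 0), min(int(u) + r + 1, w)
        y0, y1 = max(int(v) - r, 0), min(int(v) + r + 1, h)
        if x0 >= x1 or y0 >= y1:
            continue
        ys, xs = np.mgrid[y0:y1, x0:x1]
        wgt = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * s * s))
        img[y0:y1, x0:x1] += wgt[..., None] * c
        wsum[y0:y1, x0:x1] += wgt
    return img / wsum[..., None]

# Toy usage: splat a random colored blob placed 2 m in front of the camera.
pts = np.random.randn(500, 3) * 0.2 + np.array([0.0, 0.0, 2.0])
cols = np.random.rand(500, 3)
frame = splat(pts, cols, estimate_gaussians(pts))
```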
Related papers
- Bits-to-Photon: End-to-End Learned Scalable Point Cloud Compression for Direct Rendering [10.662358423042274]
We develop a point cloud compression scheme that generates a bit stream that can be directly decoded to renderable 3D Gaussians.
The proposed scheme generates a scalable bit stream, allowing multiple levels of detail at different bit-rate ranges.
Our method supports real-time color decoding and rendering of high-quality point clouds, paving the way for interactive 3D streaming applications with free viewpoints (see the layered-bitstream sketch after this list).
arXiv Detail & Related papers (2024-06-09T20:58:32Z)
- GPN: Generative Point-based NeRF [0.65268245109828]
We propose using Generative Point-based NeRF (GPN) to reconstruct and repair a partial point cloud.
The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
arXiv Detail & Related papers (2024-04-12T08:14:17Z)
- TriVol: Point Cloud Rendering via Triple Volumes [57.305748806545026]
We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
arXiv Detail & Related papers (2023-03-29T06:34:12Z)
- Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields [63.21420081888606]
Recent radiance fields and their extensions synthesize realistic images from 2D inputs.
We present Point2Pix as a novel point renderer to link sparse 3D point clouds with dense 2D image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud Compression [16.98171403698783]
We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
arXiv Detail & Related papers (2022-09-09T17:09:02Z)
- IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment [58.8330387551499]
We formulate the problem as estimation of point-wise trajectories (i.e., smooth curves); see the trajectory sketch after this list.
We propose IDEA-Net, an end-to-end deep learning framework that disentangles the problem with the assistance of explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe large improvements over state-of-the-art methods, both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z)
- Z2P: Instant Rendering of Point Clouds [104.1186026323896]
We present a technique for rendering point clouds using a neural network.
Existing point rendering techniques either use splatting or first reconstruct a surface mesh that can then be rendered.
arXiv Detail & Related papers (2021-05-30T13:58:24Z)
- HyperPocket: Generative Point Cloud Completion [19.895219420937938]
We introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations.
We leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts.
Our method offers performance competitive with other state-of-the-art models.
arXiv Detail & Related papers (2021-02-11T12:30:03Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
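To make the "scalable bit stream" idea from the Bits-to-Photon entry above concrete, here is a toy, hypothetical layering scheme, not the actual codec: Gaussians are packed into a base layer plus enhancement layers, so a decoder that stops after any layer still obtains a complete, coarser renderable set. The byte format and layer sizes below are illustrative assumptions.

```python
import struct
import numpy as np

def encode_layers(gaussians, layer_sizes):
    """gaussians: (N, 7) array of [x, y, z, r, g, b, sigma] per Gaussian,
    assumed pre-sorted from most to least important. Each layer is a
    length-prefixed chunk, so the stream can be cut at layer boundaries."""
    stream, start = b"", 0
    for size in layer_sizes:
        chunk = gaussians[start:start + size].astype(np.float32).tobytes()
        stream += struct.pack("<I", size) + chunk
        start += size
    return stream

def decode_prefix(stream, max_layers):
    """Decode up to max_layers layers; a truncated stream still renders."""
    out, off = [], 0
    for _ in range(max_layers):
        if off + 4 > len(stream):
            break
        (size,) = struct.unpack_from("<I", stream, off)
        off += 4
        out.append(np.frombuffer(stream, np.float32, size * 7, off).reshape(size, 7))
        off += size * 7 * 4
    return np.concatenate(out) if out else np.empty((0, 7), np.float32)

g = np.random.rand(1000, 7).astype(np.float32)
bits = encode_layers(g, [100, 300, 600])   # base + two enhancement layers
coarse = decode_prefix(bits, 1)            # low bit rate: 100 Gaussians
full = decode_prefix(bits, 3)              # full quality: all 1000
```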
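Likewise, for the IDEA-Net entry, "point-wise trajectories (i.e., smooth curves)" can be pictured with a minimal stand-in: each point follows a cubic Hermite curve between two frames, given assumed per-point velocities. This sketch does not reflect IDEA-Net's actual architecture, which learns the trajectories end to end.

```python
import numpy as np

def hermite(p0, p1, v0, v1, t):
    """Evaluate cubic Hermite curves at time t in [0, 1] for all points."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * v0 + h01 * p1 + h11 * v1

p0 = np.random.rand(2048, 3)        # point cloud frame at t = 0
p1 = p0 + 0.1                       # frame at t = 1 (toy rigid motion)
v = np.full_like(p0, 0.1)           # assumed per-point velocities
mid = hermite(p0, p1, v, v, 0.5)    # interpolated frame at t = 0.5
```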