Boosting Point Clouds Rendering via Radiance Mapping
- URL: http://arxiv.org/abs/2210.15107v1
- Date: Thu, 27 Oct 2022 01:25:57 GMT
- Title: Boosting Point Clouds Rendering via Radiance Mapping
- Authors: Xiaoyang Huang, Yi Zhang, Bingbing Ni, Teng Li, Kai Chen, Wenjun Zhang
- Abstract summary: We focus on boosting the image quality of point cloud rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel.
Our method achieves state-of-the-art rendering quality on point clouds, outperforming prior works by notable margins.
- Score: 49.24193509772339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, we have witnessed rapid development in NeRF-based image
rendering due to its high quality. However, point cloud rendering remains
comparatively under-explored. Unlike NeRF-based rendering, which suffers from
dense spatial sampling, point cloud rendering is naturally less computationally
intensive, which enables its deployment on mobile computing devices. In this
work, we focus on boosting the image quality of point cloud rendering with a
compact model design. We first analyze the adaptation of the volume rendering
formulation to point clouds. Based on this analysis, we simplify the NeRF
representation to a spatial mapping function that requires only a single
evaluation per pixel. Further, motivated by ray marching, we rectify the
noisy raw point clouds to the estimated intersections between rays and surfaces
as queried coordinates, which avoids spatial frequency collapse and
neighbor point disturbance. Composed of rasterization, spatial mapping, and
refinement stages, our method achieves state-of-the-art performance on
point cloud rendering, outperforming prior works by notable margins with a
smaller model size. We obtain a PSNR of 31.74 on NeRF-Synthetic, 25.88 on
ScanNet, and 30.81 on DTU. Code and data will be released soon.
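The abstract's central idea, collapsing NeRF's many samples per ray into a single spatial-mapping query per pixel, can be illustrated with a minimal sketch. Everything below (the class name RadianceMapping, layer widths, and the placeholder inputs) is a hypothetical illustration, not the authors' released code.

```python
# Minimal sketch of the single-evaluation-per-pixel idea: rasterization
# gives each pixel a surface coordinate, and one MLP call maps
# (coordinate, view direction) to a radiance value. All names and
# shapes are illustrative assumptions.
import torch
import torch.nn as nn

class RadianceMapping(nn.Module):
    """Maps a queried 3D coordinate and view direction to RGB."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        # One evaluation per pixel: no sampling along the ray.
        return self.mlp(torch.cat([xyz, view_dir], dim=-1))

# Per-pixel surface points from rasterization, rectified toward the
# estimated ray-surface intersection (placeholder values below).
H, W = 4, 4
surface_xyz = torch.rand(H * W, 3)   # rectified query coordinates
view_dirs = torch.randn(H * W, 3)
view_dirs = view_dirs / view_dirs.norm(dim=-1, keepdim=True)

model = RadianceMapping()
rgb = model(surface_xyz, view_dirs)  # (H*W, 3), one MLP call per pixel
print(rgb.shape)
```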
Related papers
- Fast Learning of Signed Distance Functions from Noisy Point Clouds via Noise to Noise Mapping [54.38209327518066]
Learning signed distance functions from point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise-to-noise mapping, which requires no clean point cloud or ground-truth supervision.
Our novelty lies in the noise-to-noise mapping, which can infer a highly accurate SDF of a single object or scene from multiple or even a single noisy observation.
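As a rough illustration of the noise-to-noise idea summarized above, one can pull points from one noisy scan onto the zero level set of a neural SDF and match the result against a second noisy scan. The pulling operation and the plain Chamfer loss here are simplifying assumptions (the paper itself uses an EMD-based loss), and all names are hypothetical.

```python
# Hedged sketch: project a noisy observation onto the SDF's estimated
# surface, then ask it to match another noisy observation of the same
# shape, so no clean supervision is needed. Assumptions throughout.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def pull_to_surface(q: torch.Tensor) -> torch.Tensor:
    q = q.clone().requires_grad_(True)
    d = sdf(q)
    (grad,) = torch.autograd.grad(d.sum(), q, create_graph=True)
    n = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return q - d * n   # move each query onto the estimated surface

def chamfer(a, b):
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

noisy_a = torch.rand(256, 3)   # two noisy scans of the same object
noisy_b = torch.rand(256, 3)
loss = chamfer(pull_to_surface(noisy_a), noisy_b)
loss.backward()   # trains the SDF without any clean point cloud
```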
arXiv Detail & Related papers (2024-07-04T03:35:02Z)
- GPN: Generative Point-based NeRF [0.65268245109828]
We propose using Generative Point-based NeRF (GPN) to reconstruct and repair a partial point cloud.
The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
arXiv Detail & Related papers (2024-04-12T08:14:17Z)
- PointNeRF++: A multi-scale, point-based Neural Radiance Field [31.23973383531481]
Point clouds offer an attractive source of information to complement images in neural scene representations.
NeRF rendering methods based on point clouds do not perform well when the point cloud quality is low.
We present a representation that aggregates point clouds at multiple scale levels with sparse voxel grids at different resolutions.
We validate our method on the NeRF Synthetic, ScanNet, and KITTI-360 datasets, outperforming the state of the art.
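A toy sketch of the multi-scale aggregation described above: point features are pooled into sparse voxel grids at several resolutions, and a query concatenates features across scales so that coarse levels remain informative where the point cloud is sparse. Grid structure, pooling, and sizes are all assumptions, not the paper's implementation.

```python
# Hedged sketch of multi-scale sparse-voxel aggregation over a point
# cloud. Dictionary grids stand in for real sparse voxel structures.
import torch

def voxel_pool(points, feats, voxel_size):
    """Average point features per occupied voxel; return a dict grid."""
    keys = torch.floor(points / voxel_size).long()
    grid = {}
    for k, f in zip(keys.tolist(), feats):
        grid.setdefault(tuple(k), []).append(f)
    return {k: torch.stack(v).mean(dim=0) for k, v in grid.items()}

def query(grids, sizes, x):
    """Concatenate features from every scale at query point x."""
    out = []
    for grid, s in zip(grids, sizes):
        key = tuple(torch.floor(x / s).long().tolist())
        out.append(grid.get(key, torch.zeros(feat_dim)))
    return torch.cat(out)

feat_dim = 8
pts = torch.rand(1024, 3)
feats = torch.rand(1024, feat_dim)
sizes = [0.05, 0.1, 0.2]                # fine-to-coarse voxel levels
grids = [voxel_pool(pts, feats, s) for s in sizes]
f = query(grids, sizes, torch.rand(3))  # (3 * feat_dim,) feature
```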
arXiv Detail & Related papers (2023-12-04T21:43:00Z)
- Pointersect: Neural Rendering with Cloud-Ray Intersection [30.485621062087585]
We propose a novel method that renders point clouds as if they are surfaces.
The proposed method is differentiable and requires no scene-specific optimization.
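The cloud-ray intersection idea can be sketched as follows: gather the points nearest a ray and let a learned set function predict the depth at which the ray meets the underlying surface. The paper uses a transformer over the gathered points; the pooled MLP below is a simplified stand-in, and all names are hypothetical.

```python
# Hedged sketch of cloud-ray intersection: nearest points to the ray
# are encoded and pooled, and a head predicts the hit depth. The whole
# pipeline is differentiable and needs no per-scene optimization.
import torch
import torch.nn as nn

def points_near_ray(pts, origin, direction, k=16):
    t = ((pts - origin) @ direction).clamp(min=0.0)  # depth along ray
    closest = origin + t[:, None] * direction        # foot points
    dist = (pts - closest).norm(dim=-1)
    idx = dist.topk(k, largest=False).indices
    return pts[idx] - origin, t[idx]

class HitPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(4, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, 1)

    def forward(self, local_pts, depths):
        x = torch.cat([local_pts, depths[:, None]], dim=-1)
        return self.dec(self.enc(x).mean(dim=0))  # predicted hit depth

pts = torch.rand(2048, 3)
origin = torch.zeros(3)
direction = torch.tensor([0.577, 0.577, 0.577])
local, t = points_near_ray(pts, origin, direction)
hit_t = HitPredictor()(local, t)  # differentiable, no scene-specific fit
```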
arXiv Detail & Related papers (2023-04-24T18:36:49Z)
- Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields [63.21420081888606]
Recent radiance fields and their extensions synthesize realistic images from 2D inputs.
We present Point2Pix, a novel point renderer that links sparse 3D point clouds with dense 2D image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
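A compact sketch of the diffusion-then-refine paradigm: a conditional denoiser runs a reverse diffusion chain to produce a coarse complete cloud, and a second network refines it with a residual. The noise schedule, conditioning, and network sizes below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of conditional point diffusion plus refinement.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Predicts the noise in x_t, conditioned on the partial cloud."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(7, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, x_t, t, cond_feat):
        t_col = torch.full_like(x_t[:, :1], float(t))
        return self.net(torch.cat([x_t, t_col, cond_feat], dim=-1))

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

partial = torch.rand(512, 3)
cond = partial.mean(dim=0).expand(512, 3)  # crude global condition
x = torch.randn(512, 3)                    # start from pure noise
denoiser, refiner = CondDenoiser(), nn.Linear(3, 3)

for t in reversed(range(T)):               # reverse (ancestral) DDPM
    eps = denoiser(x, t, cond)
    a_bar, b_t = alpha_bars[t], betas[t]
    x = (x - b_t / (1 - a_bar).sqrt() * eps) / (1 - b_t).sqrt()
    if t > 0:
        x = x + b_t.sqrt() * torch.randn_like(x)

coarse = x
refined = coarse + refiner(coarse)         # residual refinement stage
```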
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
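The one-evaluation-per-ray idea can be sketched directly: parameterize each ray (Pluecker coordinates here, as one common choice), and regress the integrated radiance with a single MLP call, with no sampling along the ray. The parameterization and widths are assumptions rather than the paper's exact ray-space embedding.

```python
# Hedged sketch of a neural light field: one network call per ray
# directly outputs the integrated radiance.
import torch
import torch.nn as nn

def pluecker(origin, direction):
    d = direction / direction.norm(dim=-1, keepdim=True)
    m = torch.cross(origin, d, dim=-1)      # moment of the ray
    return torch.cat([d, m], dim=-1)        # 6D ray coordinate

light_field = nn.Sequential(
    nn.Linear(6, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),
)

origins = torch.rand(4096, 3)
dirs = torch.randn(4096, 3)
rgb = light_field(pluecker(origins, dirs))  # one evaluation per ray
```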
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud in both shape and rendered images.
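A toy version of this self-supervision: an upsampler produces a dense cloud from the sparse input, and the only training signals are shape consistency between the two clouds and agreement between their differentiably rendered images. The soft-splatting renderer below is a stand-in for the paper's renderer; every detail is an assumption.

```python
# Hedged sketch of self-supervised upsampling: no ground-truth dense
# cloud is used, only sparse/dense consistency in shape and image space.
import torch
import torch.nn as nn

def soft_render(pts, res=32, sigma=0.05):
    """Differentiable 2D occupancy image from xy point positions."""
    grid = torch.stack(torch.meshgrid(
        torch.linspace(0, 1, res), torch.linspace(0, 1, res),
        indexing="ij"), dim=-1).reshape(-1, 2)
    d2 = torch.cdist(grid, pts[:, :2]).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(dim=1).reshape(res, res)

def chamfer(a, b):
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

upsampler = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                          nn.Linear(64, 4 * 3))  # 4 offsets per point

sparse = torch.rand(256, 3)
offsets = 0.02 * upsampler(sparse).reshape(-1, 4, 3)
dense = (sparse[:, None, :] + offsets).reshape(-1, 3)

shape_loss = chamfer(dense, sparse)              # shape consistency
image_loss = (soft_render(dense) / 4 - soft_render(sparse)).abs().mean()
loss = shape_loss + image_loss  # trains without any dense ground truth
```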
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
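The scene-flow idea above reduces, in its simplest form, to warping points along a scaled flow vector to synthesize an intermediate frame. The real network predicts the flow from the input sequence; here it is a given tensor, and all names are hypothetical.

```python
# Hedged sketch of scene-flow-based point cloud interpolation.
import torch

def interpolate_frame(pts_t0, flow_t0_to_t1, tau):
    """Warp points from time t0 toward t1 by fraction tau in [0, 1]."""
    return pts_t0 + tau * flow_t0_to_t1

pts_t0 = torch.rand(4096, 3)           # pseudo-LiDAR cloud at t0
flow = 0.1 * torch.randn(4096, 3)      # per-point 3D scene flow
mid_frame = interpolate_frame(pts_t0, flow, tau=0.5)  # cloud at t0.5
```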
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.