Pointersect: Neural Rendering with Cloud-Ray Intersection
- URL: http://arxiv.org/abs/2304.12390v1
- Date: Mon, 24 Apr 2023 18:36:49 GMT
- Title: Pointersect: Neural Rendering with Cloud-Ray Intersection
- Authors: Jen-Hao Rick Chang, Wei-Yu Chen, Anurag Ranjan, Kwang Moo Yi, Oncel Tuzel
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel method that renders point clouds as if they are surfaces.
The proposed method is differentiable and requires no scene-specific
optimization. This unique capability enables, out-of-the-box, surface normal
estimation, rendering room-scale point clouds, inverse rendering, and ray
tracing with global illumination. Unlike existing work that focuses on
converting point clouds to other representations--e.g., surfaces or implicit
functions--our key idea is to directly infer the intersection of a light ray
with the underlying surface represented by the given point cloud. Specifically,
we train a set transformer that, given a small number of local neighbor points
along a light ray, provides the intersection point, the surface normal, and the
material blending weights, which are used to render the outcome of this light
ray. Localizing the problem into small neighborhoods enables us to train a
model with only 48 meshes and apply it to unseen point clouds. Our model
achieves higher estimation accuracy than state-of-the-art surface
reconstruction and point-cloud rendering methods on three test sets. When
applied to room-scale point clouds, without any scene-specific optimization,
the model achieves competitive quality with the state-of-the-art novel-view
rendering methods. Moreover, we demonstrate the ability to render and manipulate
Lidar-scanned point clouds, for example with lighting control and object insertion.
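As a concrete illustration, the sketch below shows what such a cloud-ray intersection query could look like: gather the points nearest to the ray, express them in a ray-centered frame, and let a small set transformer regress the hit distance, surface normal, and per-neighbor blending weights. All names, layer sizes, and the neighbor-selection rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Pointersect-style query; hypothetical names and
# hyperparameters, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RayPointTransformer(nn.Module):
    def __init__(self, k=40, dim=64, heads=4, layers=3):
        super().__init__()
        self.embed = nn.Linear(3, dim)  # per-point coordinates -> tokens
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head_t = nn.Linear(dim, 1)   # distance along the ray
        self.head_n = nn.Linear(dim, 3)   # surface normal at the hit
        self.head_w = nn.Linear(dim, k)   # blending weights over neighbors

    def forward(self, pts):  # pts: (B, k, 3), expressed in the ray frame
        tokens = self.encoder(self.embed(pts))
        pooled = tokens.mean(dim=1)       # permutation-invariant pooling
        t = self.head_t(pooled)
        n = F.normalize(self.head_n(pooled), dim=-1)
        w = torch.softmax(self.head_w(pooled), dim=-1)
        return t, n, w  # hit = o + t * d, shaded with normal n and weights w

def ray_neighbors(cloud, o, d, k=40):
    """Gather the k points nearest to the ray o + t*d and shift them into a
    ray-centered frame; this localization is what lets a model trained on a
    few meshes transfer to unseen point clouds."""
    t = (cloud - o) @ d                       # projection onto the ray
    perp = cloud - o - t[:, None] * d         # perpendicular offsets
    idx = perp.norm(dim=-1).topk(k, largest=False).indices
    return (cloud[idx] - o).unsqueeze(0)      # (1, k, 3)
```

A renderer would call `ray_neighbors` once per camera ray and shade the returned hit with the predicted normal and weights; because the model only ever sees a small ray-centered neighborhood, no scene-specific optimization is involved.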
Related papers
- INPC: Implicit Neural Point Clouds for Radiance Field Rendering
We introduce a new approach for reconstruction and novel-view synthesis of real-world scenes.
We propose a hybrid scene representation, which implicitly encodes a point cloud in a continuous octree-based probability field and a multi-resolution hash grid.
Our method achieves fast inference at interactive frame rates, and can extract explicit point clouds to further enhance performance.
arXiv Detail & Related papers (2024-03-25T15:26:32Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory usage while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
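For intuition, fitting one such quadric primitive to a local patch can be done in closed form. The sketch below uses a plain least-squares fit of the ten quadric coefficients, which is one plausible reading of the representation, not the paper's exact formulation.

```python
# Hedged sketch: fit an implicit quadric x^T A x + b^T x + c = 0 to a local
# patch of points by linear least squares.
import numpy as np

def fit_quadric(pts):
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix over the 10 monomials of a general quadric.
    M = np.stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)], axis=1)
    # The smallest right singular vector minimizes ||M q|| subject to ||q|| = 1.
    q = np.linalg.svd(M)[2][-1]
    return q  # ten coefficients summarize the whole patch

patch = np.random.randn(200, 3) * [1.0, 1.0, 0.05]  # a near-planar patch
coeffs = fit_quadric(patch)
```

Storing ten coefficients per patch instead of hundreds of raw points is what makes the representation far more compact.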
- Boosting Point Clouds Rendering via Radiance Mapping
We focus on boosting the image quality of point-cloud rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel.
Our method achieves state-of-the-art rendering quality on point clouds, outperforming prior work by notable margins.
arXiv Detail & Related papers (2022-10-27T01:25:57Z)
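One way to picture the single-evaluation idea: rasterize the cloud to one surface point per pixel, then query a small network once per pixel instead of integrating many samples per ray as in NeRF. The sketch below is a hedged reading with hypothetical names, not the paper's architecture.

```python
# Hedged sketch: one network evaluation per pixel.
import torch
import torch.nn as nn

radiance = nn.Sequential(              # maps (xyz, view direction) -> RGB
    nn.Linear(6, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),
)

def shade(surface_xyz, view_dirs):
    """surface_xyz, view_dirs: (H*W, 3), e.g. from a point rasterizer.
    One forward pass per pixel, versus ~100 samples per ray in NeRF."""
    return radiance(torch.cat([surface_xyz, view_dirs], dim=-1))
```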
- Spotlights: Probing Shapes from Spherical Viewpoints
We propose a novel sampling model called Spotlights to represent a 3D shape as a compact 1D array of depth values.
It simulates a configuration of cameras evenly distributed on a sphere, where each virtual camera casts light rays from its principal point through sample points on a small concentric spherical cap to probe for possible intersections with the object enclosed by the sphere.
arXiv Detail & Related papers (2022-05-25T08:23:18Z)
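A rough sketch of that sampling pattern follows, assuming a golden-angle (Fibonacci) camera layout and Gaussian jitter for the cap samples; both are illustrative choices, not the paper's exact scheme.

```python
# Hedged sketch of a Spotlights-like ray pattern: virtual cameras on a
# sphere, each casting rays through jittered points near its view axis.
import numpy as np

def fibonacci_sphere(n):
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle spiral
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def spotlight_rays(radius=2.0, n_cams=64, rays_per_cam=32, cap=0.1):
    origins, dirs = [], []
    for c in fibonacci_sphere(n_cams) * radius:   # camera centers on a sphere
        axis = -c / np.linalg.norm(c)             # each camera looks at the origin
        for _ in range(rays_per_cam):
            jitter = np.random.randn(3) * cap
            perp = jitter - axis * (axis @ jitter)  # keep jitter off-axis
            d = axis + perp
            origins.append(c)
            dirs.append(d / np.linalg.norm(d))
    return np.array(origins), np.array(dirs)

# The depth of the first hit along each ray yields the compact 1D array.
```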
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors
Current methods are able to reconstruct surfaces by learning Signed Distance Functions (SDFs) from single point clouds without ground-truth signed distances or point normals.
We propose to reconstruct highly accurate surfaces from sparse point clouds with an on-surface prior.
Our method can learn SDFs from a single sparse point cloud without ground truth signed distances or point normals.
arXiv Detail & Related papers (2022-04-22T09:45:20Z)
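A common recipe in this line of work learns the SDF by pulling query points onto the cloud along the predicted gradient. The sketch below shows that base objective (Neural-Pull style) and omits the paper's on-surface prior; network shape and names are assumptions.

```python
# Hedged sketch: learn an SDF without ground-truth distances by pulling a
# query onto the surface along the predicted gradient and asking it to land
# on its nearest cloud point.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 256), nn.Softplus(),
                    nn.Linear(256, 256), nn.Softplus(),
                    nn.Linear(256, 1))

def pull_loss(queries, cloud):
    queries = queries.requires_grad_(True)
    f = sdf(queries)                                   # predicted signed distance
    (g,) = torch.autograd.grad(f.sum(), queries, create_graph=True)
    pulled = queries - f * g / (g.norm(dim=-1, keepdim=True) + 1e-8)
    nearest = cloud[torch.cdist(queries, cloud).argmin(dim=1)]
    return ((pulled - nearest) ** 2).sum(dim=-1).mean()
```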
- Neural Point Light Fields
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- Learning Gradient Fields for Shape Generation
A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape.
We generate point clouds by performing gradient ascent on an unnormalized probability density.
Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models.
arXiv Detail & Related papers (2020-08-14T18:06:15Z)
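The generation step can be pictured as annealed Langevin dynamics on the learned gradient field, as in score-based generative models. In the sketch below, `score_net` is a stand-in for the trained gradient predictor, and the step sizes are illustrative.

```python
# Hedged sketch: sample a point cloud by gradient ascent on a learned
# log-density (annealed Langevin dynamics).
import torch

def sample_points(score_net, n=2048, steps=100, lr=1e-3, noise=1e-2):
    x = torch.randn(n, 3)                     # start from random 3D points
    for i in range(steps):
        with torch.no_grad():
            x = x + lr * score_net(x)         # move uphill in log density
            x = x + noise * (0.95 ** i) * torch.randn_like(x)  # annealed noise
    return x  # points concentrate near the high-density region: the surface
```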
- Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance
We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic and extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
arXiv Detail & Related papers (2020-07-17T22:36:00Z)
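For intuition, an intrinsic/extrinsic comparison can be approximated by the ratio of graph-geodesic to Euclidean distance between two candidate points: a large ratio suggests they lie on different surface sheets and should not be connected by a triangle. The sketch below uses a k-NN graph as the intrinsic proxy, which is an assumption about the metric, not the paper's learned surrogate.

```python
# Hedged sketch: intrinsic (graph-geodesic) vs. extrinsic (Euclidean)
# distance ratio between two points of a cloud.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

def intrinsic_extrinsic_ratio(pts, i, j, k=8):
    tree = cKDTree(pts)
    dist, idx = tree.query(pts, k=k + 1)          # k-NN graph (col 0 is self)
    n = len(pts)
    graph = np.zeros((n, n))                      # dense for brevity
    rows = np.repeat(np.arange(n), k)
    graph[rows, idx[:, 1:].ravel()] = dist[:, 1:].ravel()
    geo = shortest_path(graph, directed=False)[i, j]  # intrinsic proxy
    euc = np.linalg.norm(pts[i] - pts[j])             # extrinsic distance
    return geo / max(euc, 1e-12)                  # large ratio => don't connect
```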
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)