PDF: Point Diffusion Implicit Function for Large-scale Scene Neural
Representation
- URL: http://arxiv.org/abs/2311.01773v1
- Date: Fri, 3 Nov 2023 08:19:47 GMT
- Title: PDF: Point Diffusion Implicit Function for Large-scale Scene Neural
Representation
- Authors: Yuhan Ding, Fukun Yin, Jiayuan Fan, Hui Li, Xin Chen, Wen Liu,
Chongshan Lu, Gang YU, Tao Chen
- Abstract summary: We propose a Point Diffusion implicit Function, PDF, for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
The region sampling based on Mip-NeRF 360 is employed to model the background representation.
- Score: 24.751481680565803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in implicit neural representations have achieved impressive
results by sampling and fusing individual points along sampling rays in the
sampling space. However, due to the explosively growing sampling space, finely
representing and synthesizing detailed textures remains a challenge for
unbounded large-scale outdoor scenes. To alleviate the dilemma of using
individual points to perceive the entire colossal space, we explore learning
the surface distribution of the scene to provide structural priors and reduce
the samplable space, and propose a Point Diffusion implicit Function, PDF, for
large-scale scene neural representation. The core of our method is a
large-scale point cloud super-resolution diffusion module that enhances the
sparse point cloud reconstructed from several training images into a dense
point cloud as an explicit prior. Then, in the rendering stage, only sampling
points that lie within the sampling radius of a prior point are retained; that
is, the sampling space is reduced from the unbounded space to the scene surface.
Meanwhile, to fill in the background of the scene that cannot be provided by
point clouds, the region sampling based on Mip-NeRF 360 is employed to model
the background representation. Extensive experiments have demonstrated the
effectiveness of our method for large-scale scene novel view synthesis, which
outperforms relevant state-of-the-art baselines.
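The surface-constrained sampling described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the brute-force nearest-neighbor distance, and the radius value are all illustrative assumptions (a production version would use a spatial index such as a KD-tree for the dense prior cloud):

```python
import numpy as np

def prune_ray_samples(ray_samples, prior_points, radius):
    """Keep only ray samples within `radius` of some prior surface point,
    shrinking the sampling space from unbounded 3D space to the scene surface.

    ray_samples:  (N, 3) candidate sample positions along camera rays
    prior_points: (M, 3) dense point cloud acting as an explicit surface prior
    radius:       sampling radius around the surface (hypothetical parameter)
    """
    # Distance from every sample to its nearest prior point
    # (brute force via broadcasting; a KD-tree scales better for large M).
    diffs = ray_samples[:, None, :] - prior_points[None, :, :]   # (N, M, 3)
    nearest = np.linalg.norm(diffs, axis=-1).min(axis=1)         # (N,)
    mask = nearest <= radius
    return ray_samples[mask], mask

# Illustrative usage: two surface points, three candidate samples.
prior = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
samples = np.array([[0.05, 0.0, 0.0],   # near the surface -> kept
                    [5.0, 5.0, 5.0],    # far from any prior point -> dropped
                    [1.02, 0.0, 0.0]])  # near the surface -> kept
kept, mask = prune_ray_samples(samples, prior, radius=0.1)
```

Background regions that the point cloud cannot cover would then be handled separately, which is where the paper's Mip-NeRF 360-style region sampling comes in.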
Related papers
- TSDF-Sampling: Efficient Sampling for Neural Surface Field using
Truncated Signed Distance Field [9.458310455872438]
This paper introduces a novel approach that substantially reduces the number of samples by incorporating the Truncated Signed Distance Field (TSDF) of the scene.
Our empirical results show an 11-fold increase in inference speed without compromising performance.
arXiv Detail & Related papers (2023-11-29T18:23:18Z) - Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z) - Pointersect: Neural Rendering with Cloud-Ray Intersection [30.485621062087585]
We propose a novel method that renders point clouds as if they are surfaces.
The proposed method is differentiable and requires no scene-specific optimization.
arXiv Detail & Related papers (2023-04-24T18:36:49Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - Semi-signed neural fitting for surface reconstruction from unoriented
point clouds [53.379712818791894]
We propose SSN-Fitting to reconstruct a better signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
arXiv Detail & Related papers (2022-06-14T09:40:17Z) - Spotlights: Probing Shapes from Spherical Viewpoints [25.824284796437652]
We propose a novel sampling model called Spotlights to represent a 3D shape as a compact 1D array of depth values.
It simulates the configuration of cameras evenly distributed on a sphere, where each virtual camera casts light rays from its principal point through sample points on a small concentric spherical cap to probe for the possible intersections with the object surrounded by the sphere.
arXiv Detail & Related papers (2022-05-25T08:23:18Z) - PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point
Cloud Upsampling [56.463507980857216]
We propose a generative adversarial network for point cloud upsampling.
It can not only make the upsampled points evenly distributed on the underlying surface but also efficiently generate clean high-frequency regions.
arXiv Detail & Related papers (2022-03-02T07:47:46Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - Point Cloud Upsampling via Disentangled Refinement [86.3641957163818]
Point clouds produced by 3D scanning are often sparse, non-uniform, and noisy.
Recent upsampling approaches aim to generate a dense point set, while achieving both distribution uniformity and proximity-to-surface.
We formulate two cascaded sub-networks, a dense generator and a spatial refiner.
arXiv Detail & Related papers (2021-06-09T02:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.