TSDF-Sampling: Efficient Sampling for Neural Surface Field using
Truncated Signed Distance Field
- URL: http://arxiv.org/abs/2311.17878v1
- Date: Wed, 29 Nov 2023 18:23:18 GMT
- Title: TSDF-Sampling: Efficient Sampling for Neural Surface Field using
Truncated Signed Distance Field
- Authors: Chaerin Min, Sehyun Cha, Changhee Won, and Jongwoo Lim
- Abstract summary: This paper introduces a novel approach that substantially reduces the number of samplings by incorporating the Truncated Signed Distance Field (TSDF) of the scene.
Our empirical results show an 11-fold increase in inference speed without compromising performance.
- Score: 9.458310455872438
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-view neural surface reconstruction has exhibited impressive results.
However, a notable limitation is the prohibitively slow inference time
compared to traditional techniques, primarily attributed to the dense sampling
required to maintain rendering quality. This paper introduces a novel
approach that substantially reduces the number of samplings by incorporating
the Truncated Signed Distance Field (TSDF) of the scene. While prior works have
proposed importance sampling, their dependence on initial uniform samples over
the entire space makes them unable to avoid performance degradation when trying
to use fewer samples. In contrast, our method leverages the TSDF
volume generated only from the training views, which proves to provide a
reasonable bound on the sampling for unseen novel views. As a result, we
achieve high rendering quality by fully exploiting the continuous neural SDF
estimation within the bounds given by the TSDF volume. Notably, our method is
the first approach that can be robustly plug-and-play into a diverse array of
neural surface field models, as long as they use the volume rendering
technique. Our empirical results show an 11-fold increase in inference speed
without compromising performance. The result videos are available at our
project page: https://tsdf-sampling.github.io/
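As a rough illustration of the idea (a hypothetical sketch, not the authors' implementation): instead of spending samples uniformly along the whole ray, one can query a precomputed TSDF volume to locate the band where the ray passes near a surface and concentrate every sample inside that band. The function and grid layout below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of TSDF-bounded sampling (illustrative, not the paper's code):
# coarse depths whose TSDF magnitude is below the truncation distance mark the
# near-surface band; all fine samples are placed inside that narrow interval.

def tsdf_bounded_samples(tsdf_along_ray, t_vals, trunc, n_samples):
    """Return sample depths restricted to the band |TSDF| < trunc.

    tsdf_along_ray: TSDF values pre-queried at coarse depths t_vals.
    Falls back to uniform sampling if the ray never enters the band.
    """
    near_surface = np.abs(tsdf_along_ray) < trunc
    if not near_surface.any():
        return np.linspace(t_vals[0], t_vals[-1], n_samples)
    t_lo = t_vals[near_surface].min()
    t_hi = t_vals[near_surface].max()
    return np.linspace(t_lo, t_hi, n_samples)

# Toy example: a surface at depth 2.0 (signed distance 2.0 - t, truncated)
t_coarse = np.linspace(0.0, 4.0, 41)
tsdf = np.clip(2.0 - t_coarse, -0.3, 0.3)
fine = tsdf_bounded_samples(tsdf, t_coarse, trunc=0.3, n_samples=8)
# All 8 fine samples now lie in the narrow band around depth 2.0
```

In this toy setup the neural SDF would then only be evaluated at the few depths inside the TSDF band, which is where the claimed speedup comes from.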
Related papers
- On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional Encoding [79.67071790034609]
We devise a tool to determine the appropriate sampling rate for learning an accurate neural implicit field without undesirable side effects.
It is observed that a PE-equipped MLP has an intrinsic frequency much higher than the highest frequency component in the PE layer.
We empirically show that, in the setting of SDF fitting, this recommended sampling rate is sufficient to secure accurate fitting results.
arXiv Detail & Related papers (2024-01-02T10:51:52Z)
- ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields [27.008124938806944]
We propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
arXiv Detail & Related papers (2023-12-13T13:37:32Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2Kx2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation [24.751481680565803]
We propose a Point implicit Function, PDF, for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
The region sampling based on Mip-NeRF 360 is employed to model the background representation.
arXiv Detail & Related papers (2023-11-03T08:19:47Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
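A hedged sketch of occupancy-grid-aided sampling in the spirit of the summary above (the function names and grid layout are illustrative assumptions, not CLONeR's code): coarse samples falling in unoccupied voxels are discarded, so the renderer spends its budget only where geometry may exist.

```python
import numpy as np

def occupancy_filtered_samples(t_vals, points, occ_grid, voxel_size):
    """Keep only sample depths whose 3-D points land in occupied voxels."""
    idx = np.floor(points / voxel_size).astype(int)
    # Treat points outside the grid as empty space
    in_bounds = np.all((idx >= 0) & (idx < np.array(occ_grid.shape)), axis=1)
    keep = np.zeros(len(t_vals), dtype=bool)
    ib = np.where(in_bounds)[0]
    keep[ib] = occ_grid[tuple(idx[ib].T)]
    return t_vals[keep]

# Toy grid: a single occupied voxel at index (2, 0, 0)
occ = np.zeros((4, 1, 1), dtype=bool)
occ[2, 0, 0] = True
t = np.linspace(0.0, 4.0, 9)                # coarse depths along the ray
origin, direction = np.zeros(3), np.array([1.0, 0.0, 0.0])
pts = origin + t[:, None] * direction       # points along an x-axis ray
kept = occupancy_filtered_samples(t, pts, occ, voxel_size=1.0)
# Only the depths inside the occupied voxel survive
```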
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
However, rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes a different direction, learning how to best reduce the number of required sample points.
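The quadrature referred to above is commonly written, in NeRF-style volume rendering, as an alpha-composited sum over per-ray samples; the sketch below (illustrative, not AdaNeRF's code) shows why the cost scales with the sample count.

```python
import numpy as np

def composite_ray(sigmas, colors, t_vals):
    """Numerical quadrature of the volume rendering integral.

    sigmas: per-sample densities, shape (N,)
    colors: per-sample RGB values, shape (N, 3)
    t_vals: sample depths, shape (N,)
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)  # last interval ~ infinite
    alphas = 1.0 - np.exp(-sigmas * deltas)             # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# One nearly opaque red sample: the composited ray color is (almost) pure red
sigmas = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
rgb = composite_ray(sigmas, colors, np.array([0.0, 1.0, 2.0]))
```

Every sample contributes one density and color evaluation of the network, which is why sample-reduction methods like those listed here translate directly into inference speedups.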
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its performance drops significantly with sparse inputs.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]
We propose a lightweight module, which we name a neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
We show that NeuSample achieves better rendering quality than NeRF while enjoying a faster inference speed.
arXiv Detail & Related papers (2021-11-30T16:43:49Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Learning to Importance Sample in Primary Sample Space [22.98252856114423]
We propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples.
We show that our approach leads to effective variance reduction in several practical scenarios.
arXiv Detail & Related papers (2018-08-23T16:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.