3D Reconstruction with Fast Dipole Sums
- URL: http://arxiv.org/abs/2405.16788v3
- Date: Wed, 29 May 2024 01:35:22 GMT
- Title: 3D Reconstruction with Fast Dipole Sums
- Authors: Hanyu Chen, Bailey Miller, Ioannis Gkioulekas
- Abstract summary: We introduce a technique for the reconstruction of high-fidelity surfaces from multi-view images.
Our technique uses a new point-based representation, the dipole sum, which generalizes the winding number to allow for the interpolation of arbitrary per-point attributes in point clouds.
- Score: 12.865206085308728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a technique for the reconstruction of high-fidelity surfaces from multi-view images. Our technique uses a new point-based representation, the dipole sum, which generalizes the winding number to allow for interpolation of arbitrary per-point attributes in point clouds with noisy or outlier points. Using dipole sums allows us to represent implicit geometry and radiance fields as per-point attributes of a point cloud, which we initialize directly from structure from motion. We additionally derive Barnes-Hut fast summation schemes for accelerated forward and reverse-mode dipole sum queries. These queries facilitate the use of ray tracing to efficiently and differentiably render images with our point-based representations, and thus update their point attributes to optimize scene geometry and appearance. We evaluate this inverse rendering framework against state-of-the-art alternatives, based on ray tracing of neural representations or rasterization of Gaussian point-based representations. Our technique significantly improves reconstruction quality at equal runtimes, while also supporting more general rendering techniques such as shadow rays for direct illumination. In the supplement, we provide interactive visualizations of our results.
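The core quantities are easy to state. For an oriented point cloud with positions p_i, normals n_i, and areas a_i, the winding number at a query point x is

w(x) = \sum_i a_i \frac{\langle n_i, p_i - x \rangle}{4\pi \lVert p_i - x \rVert^3},

and the dipole sum attaches an arbitrary per-point attribute f_i to each term,

q(x) = \sum_i f_i a_i \frac{\langle n_i, p_i - x \rangle}{4\pi \lVert p_i - x \rVert^3},

so f_i = 1 recovers w(x). (This is a simplified form; the paper additionally regularizes the kernel so that noisy and outlier points behave well.) A naive query costs O(n) per evaluation, which is where Barnes-Hut comes in: an octree caches one aggregate dipole moment per node, and any cluster that subtends a small angle from x contributes a single far-field term. The sketch below illustrates the classic Barnes-Hut criterion applied to such a sum; it is a minimal illustration, not the paper's implementation, and the names (Node, build, dipole_sum, THETA) are hypothetical.

import numpy as np

THETA = 0.5  # Barnes-Hut opening-angle threshold: smaller = more accurate, slower

class Node:
    """Octree node caching an attribute-weighted aggregate dipole."""
    def __init__(self, points, normals, areas, attrs):
        self.points, self.normals = points, normals
        self.areas, self.attrs = areas, attrs
        self.center = points.mean(axis=0)
        self.radius = np.linalg.norm(points - self.center, axis=1).max()
        # Cluster dipole moment: sum_i f_i a_i n_i.
        self.dipole = (attrs[:, None] * areas[:, None] * normals).sum(axis=0)
        self.children = []

def build(points, normals, areas, attrs, leaf_size=16):
    """Recursively split points into octants until leaves are small."""
    node = Node(points, normals, areas, attrs)
    if len(points) > leaf_size and node.radius > 0:
        octant = (points > node.center) @ np.array([1, 2, 4])
        for o in range(8):
            m = octant == o
            if m.any():
                node.children.append(
                    build(points[m], normals[m], areas[m], attrs[m], leaf_size))
    return node

def dipole_sum(x, node):
    """Approximate q(x) = sum_i f_i a_i <n_i, p_i - x> / (4 pi |p_i - x|^3)."""
    d = node.center - x
    r = np.linalg.norm(d)
    if node.children and r > 0 and node.radius / r < THETA:
        # Far field: treat the whole cluster as one dipole at its centroid.
        return np.dot(node.dipole, d) / (4.0 * np.pi * r**3)
    if not node.children:
        # Leaf: exact summation over the contained points.
        diffs = node.points - x
        rs = np.linalg.norm(diffs, axis=1)
        k = (node.normals * diffs).sum(axis=1) / (4.0 * np.pi * rs**3 + 1e-12)
        return float(np.dot(node.attrs * node.areas, k))
    return sum(dipole_sum(x, c) for c in node.children)

A quick sanity check, with points sampled on the unit sphere and f_i = 1, so the query should approximate the winding number of a closed surface:

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # points on the unit sphere
tree = build(pts, pts.copy(), np.full(1000, 4 * np.pi / 1000), np.ones(1000))
print(dipole_sum(np.zeros(3), tree))  # ~1.0: the origin is inside the sphere

Since the abstract says point positions come from structure from motion and it is the per-point attributes that get optimized, the tree topology can plausibly be built once, with only the cached dipole moments refreshed as attributes change.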
Related papers
- INPC: Implicit Neural Point Clouds for Radiance Field Rendering [5.64500060725726]
We introduce a new approach for reconstruction and novel-view synthesis of real-world scenes.
We propose a hybrid scene representation, which implicitly encodes a point cloud in a continuous octree-based probability field and a multi-resolution hash grid.
Our method achieves fast inference at interactive frame rates, and can extract explicit point clouds to further enhance performance.
arXiv Detail & Related papers (2024-03-25T15:26:32Z)
- Differentiable Point-based Inverse Rendering [9.88708409803907]
DPIR is an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF.
We devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance.
Our evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint.
arXiv Detail & Related papers (2023-12-05T04:13:31Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF).
On DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable accuracy as MVSNet in full view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We produce meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Shape As Points: A Differentiable Poisson Solver [118.12466580918172]
In this paper, we introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR).
The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field.
Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude (a minimal sketch of the spectral Poisson solve follows this entry).
arXiv Detail & Related papers (2021-06-07T09:28:38Z)
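The spectral Poisson solve behind such a differentiable PSR layer fits in a short NumPy sketch: scatter oriented points into a grid of normals, solve the Poisson equation in the Fourier domain, and read off the indicator field. This is a simplified illustration under assumed conventions (nearest-cell scattering, points normalized to [0, 1)^3), not SAP's actual implementation; psr_grid and its parameters are hypothetical.

import numpy as np

def psr_grid(points, normals, res=64):
    """Sketch of spectral Poisson surface reconstruction on a res^3 grid.

    Scatters oriented points into a normal field v, then solves
    laplacian(chi) = div(v) in the Fourier domain.
    """
    # Scatter each point's normal into its nearest grid cell.
    v = np.zeros((res, res, res, 3))
    idx = np.clip((points * res).astype(int), 0, res - 1)
    for (i, j, k), n in zip(idx, normals):
        v[i, j, k] += n

    # Angular frequencies for the res^3 grid.
    f = 2.0 * np.pi * np.fft.fftfreq(res)
    kx, ky, kz = np.meshgrid(f, f, f, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # guard the DC term against division by zero

    # chi_hat = (i k . v_hat) / (-|k|^2), i.e. divergence over Laplacian.
    v_hat = np.fft.fftn(v, axes=(0, 1, 2))
    div_hat = 1j * (kx * v_hat[..., 0] + ky * v_hat[..., 1] + kz * v_hat[..., 2])
    chi_hat = div_hat / (-k2)
    chi_hat[0, 0, 0] = 0.0  # fix the free constant: zero-mean indicator
    chi = np.fft.ifftn(chi_hat).real

    # Shift so the input points sit on the zero level set; a mesh can then
    # be extracted with marching cubes (not shown).
    return chi - chi[idx[:, 0], idx[:, 1], idx[:, 2]].mean()

Every step here (scatter, FFT, pointwise division) is differentiable, which is the property that lets mesh-level losses backpropagate to point positions and normals.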
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.