BundleRecon: Ray Bundle-Based 3D Neural Reconstruction
- URL: http://arxiv.org/abs/2305.07342v1
- Date: Fri, 12 May 2023 09:39:08 GMT
- Title: BundleRecon: Ray Bundle-Based 3D Neural Reconstruction
- Authors: Weikun Zhang, Jianke Zhu
- Abstract summary: We propose an enhanced model called BundleRecon for neural implicit multi-view reconstruction.
In existing approaches, sampling is performed along a single ray that corresponds to a single pixel.
In contrast, our model samples a patch of pixels using a bundle of rays, which incorporates information from neighboring pixels.
- Score: 9.478278728273336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing popularity of neural rendering, there has been an increasing
number of neural implicit multi-view reconstruction methods. While many models
have been enhanced in terms of positional encoding, sampling, rendering, and
other aspects to improve the reconstruction quality, current methods do not
fully leverage the information among neighboring pixels during the
reconstruction process. To address this issue, we propose an enhanced model
called BundleRecon. In existing approaches, sampling is performed along a
single ray that corresponds to a single pixel. In contrast, our model samples a
patch of pixels using a bundle of rays, which incorporates information from
neighboring pixels. Furthermore, we design bundle-based constraints to further
improve the reconstruction quality. Experimental results demonstrate that
BundleRecon is compatible with the existing neural implicit multi-view
reconstruction methods and can improve their reconstruction quality.
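To make the sampling difference concrete, below is a minimal sketch of casting a ray bundle through a patch of pixels under a pinhole camera model (OpenCV convention). The function name get_ray_bundle and its parameters are illustrative assumptions, not the paper's actual code or API.

```python
import numpy as np

def get_ray_bundle(K, c2w, center_uv, patch_size=3):
    """Cast one ray per pixel in a patch_size x patch_size window.

    Single-ray methods cast one ray through center_uv only; a ray
    bundle covers the whole patch so neighboring-pixel information
    can be used jointly during reconstruction.

    K         : (3, 3) pinhole camera intrinsics.
    c2w       : (4, 4) camera-to-world pose.
    center_uv : (u, v) pixel coordinates of the patch center.
    """
    half = patch_size // 2
    u0, v0 = center_uv

    # Pixel coordinates of the patch (patch_size**2 pixels).
    us, vs = np.meshgrid(
        np.arange(u0 - half, u0 + half + 1),
        np.arange(v0 - half, v0 + half + 1),
    )
    pix = np.stack([us, vs], axis=-1).reshape(-1, 2).astype(np.float64)

    # Unproject each pixel to a camera-space direction on the z = 1 plane.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    dirs_cam = np.stack(
        [(pix[:, 0] - cx) / fx, (pix[:, 1] - cy) / fy, np.ones(len(pix))],
        axis=-1,
    )

    # Rotate directions into world space; all rays share the camera origin.
    dirs_world = dirs_cam @ c2w[:3, :3].T
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    origins = np.broadcast_to(c2w[:3, 3], dirs_world.shape)
    return origins, dirs_world  # each of shape (patch_size**2, 3)
```

With patch_size=3 this yields 9 rays sharing one origin, so patch-level (bundle-based) losses can be applied over the sampled pixels jointly rather than per pixel.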
Related papers
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information from a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support [45.68296352822415]
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds: we take the geometry obtained from neural fields and further optimize it together with a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z)
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method based on a Signed Ray Distance Function (SRDF).
On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion [85.24673400250671]
We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
arXiv Detail & Related papers (2022-04-03T19:33:09Z)
- PERF: Performant, Explicit Radiance Fields [1.933681537640272]
We present a novel way of approaching image-based 3D reconstruction based on radiance fields.
The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks.
arXiv Detail & Related papers (2021-12-10T15:29:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.