FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural
Rendering
- URL: http://arxiv.org/abs/2305.17858v1
- Date: Mon, 29 May 2023 02:43:14 GMT
- Title: FastMESH: Fast Surface Reconstruction by Hexagonal Mesh-based Neural
Rendering
- Authors: Yisu Zhang, Jianke Zhu and Lixiang Lin
- Abstract summary: We propose an effective mesh-based neural rendering approach, named FastMESH, which only samples at the intersections between rays and the mesh.
Experiments demonstrate that our approach achieves state-of-the-art results on both reconstruction and novel view synthesis.
- Score: 8.264851594332677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the promising results of multi-view reconstruction, recent neural
rendering-based methods, such as implicit surface rendering (IDR) and volume
rendering (NeuS), not only incur a heavy computational burden during training but
also have difficulty disentangling geometry and appearance. Although achieving
faster training than implicit representations and hash coding, explicit
voxel-based methods obtain inferior results in recovering surfaces. To address
these challenges, we propose an effective mesh-based neural rendering approach,
named FastMESH, which only samples at the intersections between rays and the mesh.
A coarse-to-fine scheme is introduced to efficiently extract the initial mesh by
space carving. More importantly, we suggest a hexagonal mesh model that preserves
surface regularity by constraining the second-order derivatives of vertices, so
that only a low level of positional encoding is needed for neural rendering.
Experiments demonstrate that our approach achieves state-of-the-art results on
both reconstruction and novel view synthesis. Moreover, we obtain a 10-fold
training speedup compared to implicit representation-based methods.
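The abstract only names its two key ingredients at a high level: a low level of positional encoding for the intersection points, and a smoothness constraint on the second-order derivatives of mesh vertices. The sketch below is a minimal, hedged illustration of what such components could look like; the function names, the choice of a 1-ring umbrella Laplacian as the discrete second-order operator, and all parameters are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only -- NOT the FastMESH implementation.
# Assumes: x is an (N, 3) array of ray-mesh intersection points,
# vertices is a (V, 3) array, and neighbors[i] lists the 1-ring of vertex i.
import numpy as np

def positional_encoding(x, num_freqs=2):
    """Low-level positional encoding: only a few sine/cosine frequencies."""
    feats = [x]
    for k in range(num_freqs):
        freq = (2.0 ** k) * np.pi
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    # Shape: (N, 3 + 3 * 2 * num_freqs)
    return np.concatenate(feats, axis=-1)

def second_order_regularizer(vertices, neighbors):
    """Penalize a discrete second-order derivative via the umbrella Laplacian."""
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        # The gap between a vertex and the mean of its 1-ring neighbors
        # approximates the (negative) Laplacian at that vertex.
        lap = vertices[i] - vertices[nbrs].mean(axis=0)
        loss += float(np.sum(lap ** 2))
    return loss / len(neighbors)
```

In a full pipeline, the encoded intersection points would feed a small MLP for appearance, and the smoothness term would be added to the rendering loss; the hexagonal connectivity would make the 1-ring Laplacian a natural proxy for second-order derivatives, though the paper's exact formulation may differ.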
Related papers
- $R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement [5.810659946867557]
Mesh reconstruction based on Neural Radiance Fields (NeRF) is popular in a variety of applications such as computer graphics, virtual reality, and medical imaging.
We propose a novel algorithm that progressively generates and optimizes meshes from multi-view images.
Our method delivers highly competitive and robust performance in both mesh rendering quality and geometric quality.
arXiv Detail & Related papers (2024-08-19T16:33:17Z) - PR-NeuS: A Prior-based Residual Learning Paradigm for Fast Multi-view
Neural Surface Reconstruction [45.34454245176438]
We propose a prior-based residual learning paradigm for fast multi-view neural surface reconstruction.
Our method only takes about 3 minutes to reconstruct the surface of a single scene, while achieving competitive surface quality.
arXiv Detail & Related papers (2023-12-18T09:24:44Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of these methods.
arXiv Detail & Related papers (2023-09-14T12:05:29Z) - Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape
Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z) - Enhancing Surface Neural Implicits with Curvature-Guided Sampling and Uncertainty-Augmented Representations [37.42624848693373]
We introduce a method that directly digests depth images for the task of high-fidelity 3D reconstruction.
A simple sampling strategy is proposed to generate highly effective training data.
Despite its simplicity, our method outperforms a range of both classical and learning-based baselines.
arXiv Detail & Related papers (2023-06-03T12:23:17Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface
Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view
Reconstruction [95.37644907940857]
We propose a fast neural surface reconstruction approach, called NeuS2.
NeuS2 achieves two orders of magnitude improvement in terms of acceleration without compromising reconstruction quality.
We extend our method for fast training of dynamic scenes, with a proposed incremental training strategy and a novel global transformation prediction component.
arXiv Detail & Related papers (2022-12-10T07:19:43Z) - Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z) - NeuS: Learning Neural Implicit Surfaces by Volume Rendering for
Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.