NeuManifold: Neural Watertight Manifold Reconstruction with Efficient
and High-Quality Rendering Support
- URL: http://arxiv.org/abs/2305.17134v2
- Date: Tue, 7 Nov 2023 00:47:35 GMT
- Title: NeuManifold: Neural Watertight Manifold Reconstruction with Efficient
and High-Quality Rendering Support
- Authors: Xinyue Wei, Fanbo Xiang, Sai Bi, Anpei Chen, Kalyan Sunkavalli,
Zexiang Xu, Hao Su
- Abstract summary: We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds; we take the geometry obtained from neural fields, and further optimize the geometry as well as a compact neural texture representation.
- Score: 45.68296352822415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for generating high-quality watertight manifold meshes
from multi-view input images. Existing volumetric rendering methods are robust
in optimization but tend to generate noisy meshes with poor topology.
Differentiable rasterization-based methods can generate high-quality meshes but
are sensitive to initialization. Our method combines the benefits of both
worlds; we take the geometry initialization obtained from neural volumetric
fields, and further optimize the geometry as well as a compact neural texture
representation with differentiable rasterizers. Through extensive experiments,
we demonstrate that our method can generate accurate mesh reconstructions with
faithful appearance that are comparable to previous volume rendering methods
while being an order of magnitude faster in rendering. We also show that our
generated mesh and neural texture reconstruction is compatible with existing
graphics pipelines and enables downstream 3D applications such as simulation.
Project page: https://sarahweiii.github.io/neumanifold/
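To give a concrete picture of the second stage described in the abstract, here is a minimal PyTorch sketch, not the authors' implementation: it assumes a mesh has already been extracted from a neural volumetric field and jointly optimizes per-vertex offsets together with a small MLP standing in for the compact neural texture. The differentiable rasterizer and the real multi-view image supervision are replaced by hypothetical placeholders (random surface samples and random target colors).

```python
# Minimal sketch (not the authors' code): jointly optimize mesh vertex offsets
# and a compact neural texture, as described in the abstract above.
import torch
import torch.nn as nn

class NeuralTexture(nn.Module):
    """Tiny MLP mapping a surface point and view direction to RGB."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, viewdir: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([xyz, viewdir], dim=-1))

# Geometry initialization: vertices extracted from a neural volumetric field
# (e.g. via marching cubes); random points stand in for that mesh here.
verts = torch.rand(1000, 3)                       # hypothetical initial vertices
vert_offsets = torch.zeros_like(verts, requires_grad=True)
texture = NeuralTexture()
opt = torch.optim.Adam([vert_offsets, *texture.parameters()], lr=1e-3)

for step in range(200):
    # A real pipeline would rasterize (verts + vert_offsets) with a
    # differentiable rasterizer and compare rendered pixels to input images.
    # Here we fake the supervision with random view directions / target colors.
    viewdirs = torch.nn.functional.normalize(torch.randn_like(verts), dim=-1)
    target = torch.rand(verts.shape[0], 3)        # placeholder ground-truth colors
    pred = texture(verts + vert_offsets, viewdirs)
    loss = (pred - target).square().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual method the loss would be computed on images produced by a differentiable rasterizer rather than on per-point colors; the sketch only shows how geometry and neural texture parameters can be optimized jointly.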
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime (an illustrative neural-shader sketch appears after this list).
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry as signed distance fields (SDFs) and materials during optimization to enjoy their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting approach, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z)
- Iso-Points: Optimizing Neural Implicit Surfaces with Hybrid Representations [21.64457003420851]
We develop a hybrid neural surface representation that allows us to impose geometry-aware sampling and regularization.
We demonstrate that our method can be adopted to improve techniques for reconstructing neural implicit surfaces from multi-view images or point clouds.
arXiv Detail & Related papers (2020-12-11T15:51:04Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
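Several of the related works above, in particular "Multi-View Mesh Reconstruction with Neural Deferred Shading", combine triangle rasterization with a learned shading function. The sketch below is an illustration only, not code from any of these papers: it shows one plausible form of such a neural shader in PyTorch, an MLP that maps per-pixel position, normal, and view direction from a G-buffer to RGB. The G-buffer here is random data standing in for the output of a differentiable rasterizer.

```python
# Illustrative neural shader sketch (assumed design, not a published implementation).
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),   # position (3) + normal (3) + view dir (3)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, position, normal, viewdir):
        # All inputs: (H, W, 3) tensors taken from a rasterized G-buffer.
        x = torch.cat([position, normal, viewdir], dim=-1)
        return self.net(x)

# Toy usage with a random 64x64 G-buffer standing in for rasterizer output.
H = W = 64
gbuf_pos = torch.rand(H, W, 3)
gbuf_nrm = torch.nn.functional.normalize(torch.randn(H, W, 3), dim=-1)
viewdir = torch.nn.functional.normalize(torch.randn(H, W, 3), dim=-1)
shader = NeuralShader()
rgb = shader(gbuf_pos, gbuf_nrm, viewdir)    # (64, 64, 3) predicted image
```

In such a pipeline the shader is trained end-to-end with the mesh: gradients flow from an image loss through the shader and the differentiable rasterizer back to the vertex positions.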
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.