Neural Volumetric Mesh Generator
- URL: http://arxiv.org/abs/2210.03158v1
- Date: Thu, 6 Oct 2022 18:46:51 GMT
- Title: Neural Volumetric Mesh Generator
- Authors: Yan Zheng, Lemeng Wu, Xingchao Liu, Zhen Chen, Qiang Liu, Qixing Huang
- Abstract summary: We propose the Neural Volumetric Mesh Generator (NVMG), which can generate novel and high-quality volumetric meshes.
Our pipeline can generate high-quality, artifact-free volumetric and surface meshes from random noise or a reference image without any post-processing.
- Score: 40.224769507878904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models have shown success in generating 3D shapes with
different representations. In this work, we propose the Neural Volumetric Mesh
Generator (NVMG), which can generate novel and high-quality volumetric meshes.
Unlike previous 3D generative models for point clouds, voxels, and implicit
surfaces, the volumetric mesh representation is ready to use in industry, with
details on both the surface and the interior. Generating such highly structured
data thus poses a significant challenge. We first propose a diffusion-based
generative model that tackles this problem by generating voxelized shapes with
close-to-reality outlines and structures. From the voxelized shape, we can
directly obtain a tetrahedral mesh as a template. We then use a
voxel-conditional neural network to predict a smooth implicit surface
conditioned on the voxels, and progressively project the tetrahedral mesh onto
the predicted surface under regularization. The regularization terms are
carefully designed so that they (1) eliminate defects such as flipped and
highly distorted elements, and (2) enforce regularity of the interior and
surface structure during the deformation, yielding a high-quality final mesh.
As shown in the experiments, our pipeline can generate high-quality,
artifact-free volumetric and surface meshes from random noise or a reference
image without any post-processing. Compared with the state-of-the-art
voxel-to-mesh deformation method, our approach is more robust and performs
better when taking generated voxels as input.
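As a concrete, purely illustrative reading of the projection step, the sketch below moves a tetrahedral template's surface vertices onto the zero level set of a predicted implicit function while a uniform edge-length term keeps elements from degenerating. The function name `project_tet_mesh`, the `surf_idx` argument, and the edge regularizer are hypothetical stand-ins, not the paper's actual regularization terms or code.

```python
import torch

def project_tet_mesh(verts, edges, surf_idx, sdf,
                     steps=200, lr=1e-2, w_surf=1.0, w_reg=0.1):
    """Progressively project a tetrahedral template onto an implicit surface.

    verts:    (V, 3) float tensor, template vertex positions
    edges:    (E, 2) long tensor, unique mesh edges
    surf_idx: (S,) long tensor, indices of boundary (surface) vertices
    sdf:      callable (N, 3) -> (N,), predicted signed distance field
    """
    verts = verts.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Surface term: boundary vertices should reach the zero level set.
        loss_surf = sdf(verts[surf_idx]).abs().mean()
        # Regularizer: penalize long/uneven edges so interior and surface
        # elements stay well-shaped during the deformation.
        diff = verts[edges[:, 0]] - verts[edges[:, 1]]
        loss_reg = (diff ** 2).sum(dim=1).mean()
        (w_surf * loss_surf + w_reg * loss_reg).backward()
        opt.step()
    return verts.detach()

# Toy usage: project onto a unit sphere, sdf(p) = |p| - 1.
sphere_sdf = lambda p: p.norm(dim=-1) - 1.0
```

The paper's regularizers additionally target flipped and highly distorted tetrahedra; the uniform edge-length term is simply the smallest proxy that shows where such terms enter the optimization loop.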
Related papers
- NASM: Neural Anisotropic Surface Meshing [38.8654207201197]
This paper introduces a new learning-based method, NASM, for anisotropic surface meshing.
The key idea is to embed an input mesh into a high-dimensional Euclidean embedding space to preserve the curvature-based anisotropic metric.
Then, we propose a novel feature-sensitive remeshing on the generated high-dimensional embedding to automatically capture sharp geometric features.
arXiv Detail & Related papers (2024-10-30T15:20:10Z)
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z)
- PolyDiff: Generating 3D Polygonal Meshes with Diffusion Models [15.846449180313778]
PolyDiff is the first diffusion-based approach capable of directly generating realistic and diverse 3D polygonal meshes.
Our model is capable of producing high-quality 3D polygonal meshes, ready for integration into downstream 3D workflows.
arXiv Detail & Related papers (2023-12-18T18:19:26Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets (a minimal decomposition sketch follows this list).
We also jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
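For the wavelet-domain entries above, the representation itself is easy to prototype. Below is a minimal sketch using NumPy and PyWavelets; it assumes an off-the-shelf biorthogonal wavelet (bior6.8) rather than whatever multi-scale construction those papers actually use, and a sphere TSDF stands in for a learned shape.

```python
import numpy as np
import pywt

# Truncated signed distance function (TSDF) of a unit sphere on a 64^3 grid.
n, trunc = 64, 0.1
ax = np.linspace(-1.2, 1.2, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
tsdf = np.clip(np.sqrt(x**2 + y**2 + z**2) - 1.0, -trunc, trunc)

# Two-level decomposition with a biorthogonal wavelet: coeffs[0] is the
# coarse approximation volume, coeffs[-1] the seven finest detail subbands,
# loosely matching the papers' coarse/detail coefficient volumes.
coeffs = pywt.wavedecn(tsdf, wavelet="bior6.8", level=2)
coarse, detail = coeffs[0], coeffs[-1]
print(coarse.shape, sorted(detail.keys()))  # e.g. (28, 28, 28) and 7 subbands

# The transform is invertible, so a generative model over the coefficient
# volumes implicitly models the TSDF (and hence the shape) itself.
recon = pywt.waverecn(coeffs, wavelet="bior6.8")[:n, :n, :n]
print(np.abs(recon - tsdf).max())  # near-zero: clean round trip
```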