Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying
Kernels
- URL: http://arxiv.org/abs/2006.04325v2
- Date: Wed, 21 Oct 2020 06:16:23 GMT
- Authors: Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao
Li, Yaser Sheikh
- Abstract summary: We propose a non-template-specific fully convolutional mesh autoencoder for arbitrary registered mesh data.
Our model outperforms state-of-the-art methods on reconstruction accuracy.
- Score: 41.81187438494441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning latent representations of registered meshes is useful for many 3D
tasks. Techniques have recently shifted to neural mesh autoencoders. Although
they demonstrate higher precision than traditional methods, they remain unable
to capture fine-grained deformations. Furthermore, these methods can only be
applied to a template-specific surface mesh and are not applicable to more
general meshes such as tetrahedral and non-manifold meshes. While more general
graph convolution methods can be employed, they fall short in reconstruction
precision and require more memory. In this paper, we
propose a non-template-specific fully convolutional mesh autoencoder for
arbitrary registered mesh data. It is enabled by our novel convolution and
(un)pooling operators learned with globally shared weights and locally varying
coefficients, which efficiently capture the spatially varying content
presented by irregular mesh connections. Our model outperforms state-of-the-art
methods on reconstruction accuracy. In addition, the latent codes of our
network are fully localized thanks to the fully convolutional structure, and
thus have much higher interpolation capability than many traditional 3D mesh
generation models.
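The core operator described in the abstract combines a small set of globally shared weight kernels with per-vertex mixing coefficients, so each vertex effectively gets its own kernel without storing a full weight matrix per location. The sketch below is a minimal NumPy illustration of that idea under assumed shapes and a simple mean aggregation over the 1-ring; the variable names (`W`, `alpha`, `neighbors`) and the aggregation scheme are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy registered mesh: 5 vertices with irregular valence (fixed neighbor lists).
neighbors = [[1, 2], [0, 2, 3], [0, 1, 4], [1], [2]]
V, C_in, C_out, B = 5, 8, 16, 4  # vertices, in/out channels, basis kernels

x = rng.standard_normal((V, C_in))

# Globally shared weight basis: B kernels reused at every vertex.
W = rng.standard_normal((B, C_in, C_out)) * 0.1
# Locally varying coefficients: one mixing vector per vertex (learned in practice).
alpha = rng.standard_normal((V, B))

def spatially_varying_conv(x, neighbors, W, alpha):
    """Per-vertex kernel = alpha[v] mixed over the shared basis W,
    applied to the mean-pooled 1-ring neighborhood of each vertex."""
    out = np.zeros((x.shape[0], W.shape[2]))
    for v, nbrs in enumerate(neighbors):
        feat = x[[v] + nbrs].mean(axis=0)           # (C_in,) aggregated features
        kernel = np.tensordot(alpha[v], W, axes=1)  # (C_in, C_out) local kernel
        out[v] = feat @ kernel
    return out

y = spatially_varying_conv(x, neighbors, W, alpha)
print(y.shape)  # (5, 16)
```

Because only `B` kernels plus `V x B` scalars are stored, the parameter count stays far below one dense kernel per vertex while the effective kernel still varies spatially.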
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models, which addresses the process of 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance [66.40153183581894]
We introduce a generic and scalable mesh generation framework PivotMesh.
PivotMesh makes an initial attempt to extend the native mesh generation to large-scale datasets.
We show that PivotMesh can generate compact and sharp 3D meshes across various categories.
arXiv Detail & Related papers (2024-05-27T07:13:13Z)
- GetMesh: A Controllable Model for High-quality Mesh Generation and Manipulation [25.42531640985281]
Mesh is a fundamental representation of 3D assets in various industrial applications, and is widely supported by professional software.
We propose a highly controllable generative model, GetMesh, for mesh generation and manipulation across different categories.
arXiv Detail & Related papers (2024-03-18T17:25:36Z)
- Mesh Convolutional Autoencoder for Semi-Regular Meshes of Different Sizes [0.0]
State-of-the-art mesh convolutional autoencoders require a fixed connectivity of all input meshes handled by the autoencoder.
We transform the discretization of the surfaces to semi-regular meshes that have a locally regular connectivity and whose meshing is hierarchical.
We apply the same mesh autoencoder to different datasets and our reconstruction error is more than 50% lower than the error from state-of-the-art models.
arXiv Detail & Related papers (2021-10-18T15:30:40Z)
- Mesh Draping: Parametrization-Free Neural Mesh Transfer [92.55503085245304]
Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
arXiv Detail & Related papers (2021-10-11T17:24:52Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.