Neural Subdivision
- URL: http://arxiv.org/abs/2005.01819v1
- Date: Mon, 4 May 2020 20:03:21 GMT
- Title: Neural Subdivision
- Authors: Hsueh-Ti Derek Liu, Vladimir G. Kim, Siddhartha Chaudhuri, Noam
Aigerman, Alec Jacobson
- Abstract summary: This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
- Score: 58.97214948753937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces Neural Subdivision, a novel framework for data-driven
coarse-to-fine geometry modeling. During inference, our method takes a coarse
triangle mesh as input and recursively subdivides it to a finer geometry by
applying the fixed topological updates of Loop Subdivision, but predicting
vertex positions using a neural network conditioned on the local geometry of a
patch. This approach enables us to learn complex non-linear subdivision
schemes, beyond simple linear averaging used in classical techniques. One of
our key contributions is a novel self-supervised training setup that only
requires a set of high-resolution meshes for learning network weights. For any
training shape, we stochastically generate diverse low-resolution
discretizations of coarse counterparts, while maintaining a bijective mapping
that prescribes the exact target position of every new vertex during the
subdivision process. This leads to a very efficient and accurate loss function
for conditional mesh generation, and enables us to train a method that
generalizes across discretizations and favors preserving the manifold structure
of the output. During training we optimize for the same set of network weights
across all local mesh patches, thus providing an architecture that is not
constrained to a specific input mesh, fixed genus, or category. Our network
encodes patch geometry in a local frame in a rotation- and
translation-invariant manner. Jointly, these design choices enable our method
to generalize well, and we demonstrate that even when trained on a single
high-resolution mesh our method generates reasonable subdivisions for novel
shapes.
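The scheme described above separates fixed topology from learned geometry: every subdivision step applies the 1-to-4 triangle split of Loop subdivision, and only the placement of new edge vertices changes. A minimal sketch follows, where `predict_offset` is a hypothetical stand-in for the paper's network (the real method conditions on local patch geometry encoded in a local frame, and classical Loop subdivision additionally smooths the old vertices, which is omitted here):

```python
import numpy as np

def subdivide_once(verts, faces, predict_offset=None):
    """One subdivision step: the fixed 1-to-4 topological split of Loop
    subdivision. New edge vertices start at the linear midpoint (the
    classical averaging rule); an optional predictor can add a learned
    displacement on top."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    edge_mid = {}  # undirected edge (i, j) -> index of its new midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_mid:
            p = 0.5 * (verts[i] + verts[j])                 # linear average
            if predict_offset is not None:
                p = p + predict_offset(verts[i], verts[j])  # hypothetical network
            edge_mid[key] = len(verts)
            verts.append(p)
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # each input triangle becomes four: three corners plus the center
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.stack(verts), new_faces

# one triangle -> four faces and three new midpoint vertices
V = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
F = [(0, 1, 2)]
V2, F2 = subdivide_once(V, F)
```

Because midpoint vertices are shared through the `edge_mid` dictionary, adjacent triangles reuse the same new vertex, which is what keeps the output manifold under repeated application.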
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z) - Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z) - Mesh Draping: Parametrization-Free Neural Mesh Transfer [92.55503085245304]
Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
arXiv Detail & Related papers (2021-10-11T17:24:52Z) - Neural Marching Cubes [14.314650721573743]
We introduce Neural Marching Cubes (NMC), a data-driven approach for extracting a triangle mesh from a discretized implicit field.
We show that our network learns local features with limited receptive fields, hence it generalizes well to new shapes and new datasets.
arXiv Detail & Related papers (2021-06-21T17:18:52Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose applying a primal-dual framework, drawn from the graph-neural-network literature, to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z) - A deep learning approach for the computation of curvature in the level-set method [0.0]
We propose a strategy to estimate the mean curvature of two-dimensional implicit interfaces in the level-set method.
Our approach is based on fitting feed-forward neural networks to synthetic data sets constructed from circular interfaces immersed in uniform grids of various resolutions.
arXiv Detail & Related papers (2020-02-04T00:49:47Z)
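The frame-averaging idea in the equivariant shape-space entry above can be made concrete: build a small set of candidate frames from PCA of the input (covariance eigenvectors with every column-sign combination), express the input in each frame, apply the network, map back, and average. The wrapped map is then equivariant to rotations and translations by construction. A minimal sketch under simplifying assumptions (distinct covariance eigenvalues; `phi` is an arbitrary per-point map standing in for the paper's encoder/decoder):

```python
import numpy as np
from itertools import product

def pca_frames(X):
    """Candidate frames from PCA of the centered point set: covariance
    eigenvectors with every column-sign combination (2^3 = 8 frames).
    A simplified, hypothetical stand-in for the paper's construction;
    assumes distinct eigenvalues."""
    c = X.mean(axis=0)
    _, V = np.linalg.eigh(np.cov((X - c).T))
    frames = [V * np.array(s) for s in product([1.0, -1.0], repeat=3)]
    return c, frames

def frame_average(phi, X):
    """Equivariant wrapper: express the centered points in each frame,
    apply phi, map back, and average. Rotating or translating X permutes
    the frame set, so the averaged output transforms accordingly."""
    c, frames = pca_frames(X)
    Y = X - c
    return np.mean([phi(Y @ F) @ F.T for F in frames], axis=0) + c
```

Enumerating all sign combinations is what makes the frame *set* well defined despite the sign ambiguity of individual eigenvectors, at the cost of eight forward passes.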
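The purely rotation-invariant representation mentioned in the point-cloud entry above (and echoed by Neural Subdivision's invariant patch encoding) rests on a simple fact: rigid motions preserve distances, so distance-based features can replace raw Cartesian coordinates as network input. A simplified, hypothetical illustration, not the representation from either paper:

```python
import numpy as np

def invariant_features(points, k=4):
    """Rotation- and translation-invariant per-point features: distances
    to the k nearest neighbors and to the centroid. Rigid motions preserve
    all pairwise distances, so these features are unchanged under them."""
    c = points.mean(axis=0)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    nn = np.sort(d, axis=1)[:, 1:k + 1]          # k nearest-neighbor distances
    to_c = np.linalg.norm(points - c, axis=1)    # distance to the centroid
    return np.hstack([nn, to_c[:, None]])        # shape (n, k + 1)
```

Feeding such features to a network yields invariant predictions for free, at the price of discarding orientation information that an equivariant approach (like the frame averaging above) would retain.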
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.