Mesh Draping: Parametrization-Free Neural Mesh Transfer
- URL: http://arxiv.org/abs/2110.05433v1
- Date: Mon, 11 Oct 2021 17:24:52 GMT
- Title: Mesh Draping: Parametrization-Free Neural Mesh Transfer
- Authors: Amir Hertz, Or Perel, Raja Giryes, Olga Sorkine-Hornung and Daniel Cohen-Or
- Abstract summary: Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
- Score: 92.55503085245304
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite recent advances in geometric modeling, 3D mesh modeling still
involves a considerable amount of manual labor by experts. In this paper, we
introduce Mesh Draping: a neural method for transferring existing mesh
structure from one shape to another. The method drapes the source mesh over the
target geometry and at the same time seeks to preserve the carefully designed
characteristics of the source mesh. At its core, our method deforms the source
mesh using progressive positional encoding. We show that by leveraging
gradually increasing frequencies to guide the neural optimization, we are able
to achieve stable and high quality mesh transfer. Our approach is simple and
requires little user guidance, compared to contemporary surface mapping
techniques which rely on parametrization or careful manual tuning. Most
importantly, Mesh Draping is a parameterization-free method, and thus
applicable to a variety of target shape representations, including point
clouds, polygon soups, and non-manifold meshes. We demonstrate that the
transferred meshing remains faithful to the source mesh design characteristics,
and at the same time fits the target geometry well.
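The abstract's core idea, deforming the source mesh with a positional encoding whose frequencies are introduced gradually over the course of optimization, can be sketched as frequency-annealed Fourier features. The function name and the linear per-band fade-in schedule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def progressive_positional_encoding(points, num_freqs, progress):
    """Fourier-feature encoding whose higher frequency bands are blended
    in gradually as `progress` goes from 0 to 1.

    points:    (N, D) array of input coordinates.
    num_freqs: number of frequency bands (2^0 ... 2^(num_freqs - 1)).
    progress:  scalar in [0, 1], fraction of the optimization completed.
    Returns an (N, D * 2 * num_freqs) encoding.
    """
    t = progress * num_freqs
    k = np.arange(num_freqs)
    # Soft mask: band 0 is fully active at the start; each higher band
    # fades in linearly as the schedule reaches it (an assumed schedule).
    alpha = np.clip(t - k + 1.0, 0.0, 1.0)      # (num_freqs,)

    freqs = (2.0 ** k) * np.pi                  # (num_freqs,)
    angles = points[..., None] * freqs          # (N, D, num_freqs)
    enc = np.concatenate(
        [alpha * np.sin(angles), alpha * np.cos(angles)], axis=-1
    )                                           # (N, D, 2 * num_freqs)
    return enc.reshape(points.shape[0], -1)
```

Early in optimization only the low-frequency terms are nonzero, so the network first fits the coarse shape; as `progress` grows, higher bands switch on and the deformation can capture fine detail, which is the stabilizing effect the abstract attributes to gradually increasing frequencies.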
Related papers
- SieveNet: Selecting Point-Based Features for Mesh Networks [41.74190660234404]
Meshes are widely used in 3D computer vision and graphics, but their irregular topology poses challenges in applying them to existing neural network architectures.
Recent advances in mesh neural networks have turned to remeshing, pushing beyond pioneering methods that take only the raw meshes as input.
We propose SieveNet, a novel paradigm that takes into account both the regular topology and the exact geometry.
arXiv Detail & Related papers (2023-08-24T03:40:16Z)
- Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- 3D Pose Transfer with Correspondence Learning and Mesh Refinement [41.92922228475176]
3D pose transfer is one of the most challenging 3D generation tasks.
We propose a correspondence-refinement network to help the 3D pose transfer for both human and animal meshes.
arXiv Detail & Related papers (2021-09-30T11:49:03Z)
- Progressive Encoding for Neural Optimization [92.55503085245304]
We demonstrate the effectiveness of the progressive positional encoding (PPE) layer for mesh transfer and its advantages compared to contemporary surface mapping techniques.
Most importantly, our technique is a parameterization-free method, and thus applicable to a variety of target shape representations.
arXiv Detail & Related papers (2021-04-19T08:22:55Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows [79.39092757515395]
We propose Neural Mesh Flow (NMF) to generate two-manifold meshes for genus-0 shapes.
NMF is a shape auto-encoder consisting of several Neural Ordinary Differential Equation (NODE) blocks that learn accurate mesh geometry by progressively deforming a spherical mesh.
Our experiments demonstrate that NMF facilitates several applications such as single-view mesh reconstruction, global shape parameterization, texture mapping, shape deformation and correspondence.
arXiv Detail & Related papers (2020-07-21T17:45:41Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.