Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes
- URL: http://arxiv.org/abs/2205.02904v1
- Date: Thu, 5 May 2022 19:51:13 GMT
- Title: Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes
- Authors: Noam Aigerman, Kunal Gupta, Vladimir G. Kim, Siddhartha Chaudhuri, Jun
Saito, Thibault Groueix
- Abstract summary: This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network.
The framework is based on reducing the neural aspect to a prediction of a matrix for a single point, conditioned on a global shape descriptor.
By operating in the intrinsic gradient domain of each individual mesh, it allows the framework to predict highly-accurate mappings.
- Score: 38.157373733083894
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a framework designed to accurately predict piecewise
linear mappings of arbitrary meshes via a neural network, enabling training and
evaluating over heterogeneous collections of meshes that do not share a
triangulation, as well as producing highly detail-preserving maps whose
accuracy exceeds the current state of the art. The framework is based on reducing
the neural aspect to a prediction of a matrix for a single given point,
conditioned on a global shape descriptor. The field of matrices is then
projected onto the tangent bundle of the given mesh, and used as candidate
Jacobians for the predicted map. The map is computed by a standard Poisson
solve, implemented as a differentiable layer with cached pre-factorization for
efficient training. This construction is agnostic to the triangulation of the
input, thereby enabling applications on datasets with varying triangulations.
At the same time, by operating in the intrinsic gradient domain of each
individual mesh, it allows the framework to predict highly-accurate mappings.
We validate these properties by conducting experiments over a broad range of
scenarios, from semantic ones such as morphing, registration, and deformation
transfer, to optimization-based ones, such as emulating elastic deformations
and contact correction; to our knowledge, this is also the first work to tackle
the task of learning to compute UV parameterizations of arbitrary meshes. The
results exhibit the high accuracy of the method as well as its
versatility, as it is readily applied to the above scenarios without any
changes to the framework.
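To make the pipeline described above concrete, the following is a minimal sketch in PyTorch/SciPy of the three steps the abstract names: a small MLP predicts one 3x3 matrix per face conditioned on a global shape code, the matrices act as candidate Jacobians, and the map is recovered by an area-weighted Poisson solve wrapped as a differentiable layer whose sparse factorization is cached. All names (face_gradients, JacobianFieldNet, CachedPoissonSolve, predict_map), the per-face (rather than per-point) sampling, and the architectural details are illustrative assumptions, not the authors' released code; here the restriction to each face's tangent plane happens implicitly through the least-squares Poisson system rather than via an explicit tangent-bundle projection.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import splu


def face_gradients(V, F):
    """Sparse per-face gradient operator G (3|F| x |V|) and per-face areas."""
    v0, v1, v2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    n = np.cross(v1 - v0, v2 - v0)
    dblA = np.linalg.norm(n, axis=1, keepdims=True)          # 2 * area of each face
    n_hat = n / dblA
    # Gradient of each corner's hat function: (unit normal x opposite edge) / (2 * area).
    g = np.stack([np.cross(n_hat, v2 - v1),
                  np.cross(n_hat, v0 - v2),
                  np.cross(n_hat, v1 - v0)], axis=1) / dblA[:, None]
    nF = len(F)
    rows = np.broadcast_to(3 * np.arange(nF)[:, None, None] + np.arange(3), (nF, 3, 3))
    cols = np.broadcast_to(F[:, :, None], (nF, 3, 3))
    G = coo_matrix((g.ravel(), (rows.ravel(), cols.ravel())),
                   shape=(3 * nF, len(V))).tocsr()
    return G, 0.5 * dblA.ravel()


class CachedPoissonSolve(torch.autograd.Function):
    """Differentiable sparse solve; the LU factorization is computed once per mesh and reused."""
    @staticmethod
    def forward(ctx, rhs, lu):
        ctx.lu, ctx.dtype = lu, rhs.dtype
        x = lu.solve(rhs.detach().cpu().numpy().astype(np.float64))
        return torch.from_numpy(x).to(rhs.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        # The system matrix is symmetric, so the backward pass reuses the same cached solve.
        g = ctx.lu.solve(grad_out.detach().cpu().numpy().astype(np.float64))
        return torch.from_numpy(g).to(ctx.dtype), None


class JacobianFieldNet(nn.Module):
    """Predicts one 3x3 candidate Jacobian per face from (face centroid, global shape code)."""
    def __init__(self, code_dim=128, width=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + code_dim, width), nn.ReLU(),
                                 nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, 9))

    def forward(self, centroids, z):              # centroids: (F, 3); z: (code_dim,), assumed
        x = torch.cat([centroids, z.expand(len(centroids), -1)], dim=-1)  # from a shape encoder
        return self.mlp(x).view(-1, 3, 3)


def predict_map(V, F, net, z):
    """Network Jacobians -> area-weighted Poisson solve for the mapped vertex positions."""
    G, areas = face_gradients(V, F)
    M = diags(np.repeat(areas, 3))
    L = (G.T @ M @ G).tocsc()                     # weighted Laplacian G^T M G
    lu = splu(L[1:, 1:])                          # pin vertex 0; cache this per mesh
    GtM = (G.T @ M).tocoo()                       # fixed divergence-like operator
    GtM_t = torch.sparse_coo_tensor(np.vstack([GtM.row, GtM.col]),
                                    GtM.data.astype(np.float32), GtM.shape).coalesce()
    centroids = torch.tensor(V[F].mean(axis=1), dtype=torch.float32)
    J = net(centroids, z)                         # (F, 3, 3) candidate Jacobians
    rhs = torch.sparse.mm(GtM_t, J.transpose(1, 2).reshape(-1, 3))
    phi = CachedPoissonSolve.apply(rhs[1:], lu)   # vertices 1..n-1 of the map
    return torch.cat([torch.zeros(1, 3), phi], dim=0)   # vertex 0 pinned at the origin
```

In this reading, the expensive part (the factorization of the system matrix) depends only on the mesh, so it can be computed once and reused across training iterations, while gradients flow through the right-hand side back to the predicted Jacobians.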
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction [40.73187749820041]
Mesh deformation plays a pivotal role in many 3D vision tasks including dynamic simulations, rendering, and reconstruction.
A prevalent approach in current deep learning is the set-based one, which measures the discrepancy between two surfaces by comparing point clouds randomly sampled from the two meshes under the Chamfer pseudo-distance.
We propose a novel metric for learning mesh deformation, defined by the sliced Wasserstein distance on meshes represented as probability measures, which generalizes the set-based approach.
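As a concrete illustration of the idea in this summary (not that paper's exact formulation, which defines the measures and the slicing on the meshes themselves), here is a minimal sketch of the sliced Wasserstein distance between two meshes treated as area-weighted probability measures; the helper names (sample_surface, sliced_wasserstein) are assumptions.

```python
import torch

def sample_surface(V, F, n):
    """Sample n points on a triangle mesh, uniformly with respect to surface area."""
    v0, v1, v2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    areas = torch.linalg.cross(v1 - v0, v2 - v0).norm(dim=1) / 2
    f = torch.multinomial(areas, n, replacement=True)      # faces drawn proportionally to area
    u, v = torch.rand(n, 1), torch.rand(n, 1)
    flip = (u + v) > 1                                      # reflect to stay inside the triangle
    u, v = torch.where(flip, 1 - u, u), torch.where(flip, 1 - v, v)
    return v0[f] + u * (v1[f] - v0[f]) + v * (v2[f] - v0[f])

def sliced_wasserstein(x, y, n_proj=64):
    """Sliced 2-Wasserstein distance between two equal-size point sets of shape (N, 3)."""
    theta = torch.randn(n_proj, x.shape[1], device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)         # random unit directions
    px = (x @ theta.T).sort(dim=0).values                   # sorted 1D projections
    py = (y @ theta.T).sort(dim=0).values
    return ((px - py) ** 2).mean()

# e.g. loss = sliced_wasserstein(sample_surface(V_pred, F, 4096), sample_surface(V_gt, F_gt, 4096))
```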
arXiv Detail & Related papers (2023-05-27T19:10:19Z)
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code.
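For intuition only, here is a minimal sketch of per-layer Lipschitz regularization in the spirit of that paper: each linear layer carries a learnable bound, its rows are rescaled to respect that bound, and the training loss penalizes the product of bounds. The exact normalization and initialization in the paper may differ, and the names (LipschitzLinear, lipschitz_penalty) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipschitzLinear(nn.Linear):
    """Linear layer whose infinity-norm Lipschitz bound softplus(c) is learned and enforced."""
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        # Start with softplus(c) above every absolute row sum, so the layer is initially unchanged.
        self.c = nn.Parameter(self.weight.detach().abs().sum(dim=1).max())

    def forward(self, x):
        bound = F.softplus(self.c)
        row_sums = self.weight.abs().sum(dim=1, keepdim=True)
        scale = torch.clamp(bound / row_sums, max=1.0)       # shrink only rows exceeding the bound
        return F.linear(x, self.weight * scale, self.bias)

def lipschitz_penalty(model):
    """Product of per-layer bounds: an upper bound on the network's Lipschitz constant
    when the activations are themselves 1-Lipschitz (e.g. ReLU)."""
    bound = torch.ones(())
    for m in model.modules():
        if isinstance(m, LipschitzLinear):
            bound = bound * F.softplus(m.c)
    return bound
```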
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
- U-mesh: Human Correspondence Matching with Mesh Convolutional Networks [15.828285556159026]
We propose an elegant fusion of regression (bottom-up) and generative (top-down) methods to fit a parametric template model to raw scan meshes.
Our first major contribution is an intrinsic convolutional mesh U-net architecture that predicts pointwise correspondence to a template surface.
We evaluate the proposed method on the FAUST correspondence challenge, where we achieve a 20% (33%) improvement over state-of-the-art methods for inter- (intra-) subject correspondence.
arXiv Detail & Related papers (2021-08-15T08:58:45Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Correspondence Learning via Linearly-invariant Embedding [40.07515336866026]
We show that learning the basis from data can both improve robustness and lead to better accuracy in challenging settings.
We demonstrate that our approach achieves state-of-the-art results in challenging non-rigid 3D point cloud correspondence applications.
arXiv Detail & Related papers (2020-10-25T15:31:53Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)