DualConv: Dual Mesh Convolutional Networks for Shape Correspondence
- URL: http://arxiv.org/abs/2103.12459v1
- Date: Tue, 23 Mar 2021 11:22:47 GMT
- Title: DualConv: Dual Mesh Convolutional Networks for Shape Correspondence
- Authors: Nitika Verma, Adnane Boukhayma, Jakob Verbeek, Edmond Boyer
- Abstract summary: Convolutional neural networks have been extremely successful for 2D images and are readily extended to handle 3D voxel data.
In this paper we explore how these networks can be extended to the dual face-based representation of triangular meshes.
Our experiments demonstrate that additionally building convolutional models that explicitly leverage the neighborhood-size regularity of dual meshes enables learning shape representations that perform on par with or better than previous approaches.
- Score: 44.94765770516059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks have been extremely successful for 2D images
and are readily extended to handle 3D voxel data. Meshes are a more common 3D
shape representation that quantizes the shape surface instead of the ambient
space as with voxels, hence giving access to surface properties such as normals
or appearances. The formulation of deep neural networks on meshes is, however,
more complex since they are irregular data structures where the number of
neighbors varies across vertices. While graph convolutional networks have
previously been proposed over mesh vertex data, in this paper we explore how
these networks can be extended to the dual face-based representation of
triangular meshes, where nodes represent triangular faces in place of vertices.
In comparison to the primal vertex mesh, its face dual offers several
advantages, including, importantly, that the dual mesh is regular in the sense
that each triangular face has exactly three neighbors. Moreover, the dual mesh
suggests the use of a number of input features that are naturally defined over
faces, such as surface normals and face areas. We evaluate the dual approach on
the shape correspondence task on the FAUST human shape dataset and other
versions of it with varying mesh topology. While applying generic graph
convolutions to the dual mesh already shows improvements over primal mesh
inputs, our experiments demonstrate that additionally building convolutional
models that explicitly leverage the neighborhood-size regularity of dual meshes
enables learning shape representations that perform on par with or better than
previous approaches in terms of correspondence accuracy and mean geodesic
error, while being more robust to topological changes in the meshes between
training and testing shapes.
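The regularity described above — on a closed manifold triangle mesh, every face shares an edge with exactly three other faces — is what makes the face dual convenient for convolution. The following is a minimal sketch of building this face-dual adjacency and the per-face input features (unit normals and face areas) that the abstract mentions; `dual_mesh_features` is a hypothetical helper name for illustration, not code from the paper.

```python
import numpy as np

def dual_mesh_features(verts, faces):
    """Build the face-dual graph of a closed manifold triangle mesh.

    verts: (V, 3) float array of vertex positions.
    faces: (F, 3) int array of vertex indices per triangle.
    Returns (neighbors, normals, areas), where neighbors is (F, 3):
    for each face, the indices of the three faces sharing an edge with it.
    """
    # Map each undirected edge to the faces that contain it.
    edge_to_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_faces.setdefault(frozenset((u, v)), []).append(f)

    # On a closed manifold every edge borders exactly two faces,
    # so every dual node (face) has exactly three neighbors.
    neighbors = [[] for _ in range(len(faces))]
    for fs in edge_to_faces.values():
        f0, f1 = fs  # exactly two faces per edge
        neighbors[f0].append(f1)
        neighbors[f1].append(f0)
    neighbors = np.array(neighbors)

    # Per-face input features: the cross product of two edge vectors
    # gives the face normal direction and twice the face area.
    p0, p1, p2 = (verts[faces[:, i]] for i in range(3))
    cross = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(cross, axis=1, keepdims=True)
    normals = cross / norm
    areas = 0.5 * norm[:, 0]
    return neighbors, normals, areas
```

For a tetrahedron (the smallest closed triangle mesh), the returned adjacency is a (4, 3) array: each of the four faces has exactly three neighbors, illustrating the fixed neighborhood size that a dual-mesh convolution can exploit.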
Related papers
- GenUDC: High Quality 3D Mesh Generation with Unsigned Dual Contouring Representation [13.923644541595893]
Generating high-quality meshes with complex structures and realistic surfaces remains a challenge for 3D generative models.
We propose the GenUDC framework to address these challenges by leveraging the Unsigned Dual Contouring (UDC) as the mesh representation.
In addition, GenUDC adopts a two-stage, coarse-to-fine generative process for 3D mesh generation.
arXiv Detail & Related papers (2024-10-23T11:59:49Z) - SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - Laplacian2Mesh: Laplacian-Based Mesh Understanding [4.808061174740482]
We introduce a novel and flexible convolutional neural network (CNN) model, called Laplacian2Mesh, for 3D triangle mesh.
Mesh pooling is applied to expand the receptive field of the network by the multi-space transformation of Laplacian.
Experiments on various learning tasks applied to 3D meshes demonstrate the effectiveness and efficiency of Laplacian2Mesh.
arXiv Detail & Related papers (2022-02-01T10:10:13Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - Subdivision-Based Mesh Convolution Networks [38.09613983540932]
Convolutional neural networks (CNNs) have made great breakthroughs in 2D computer vision.
This paper introduces a novel CNN framework, named SubdivNet, for 3D triangle meshes with Loop subdivision sequence connectivity.
Experiments on mesh classification, segmentation, correspondence, and retrieval on real-world data demonstrate the effectiveness and efficiency of SubdivNet.
arXiv Detail & Related papers (2021-06-04T06:50:34Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes [28.571946680616765]
We propose a family of deep hierarchical convolutional networks over 3D geometric data.
The first type, geodesic convolutions, defines the kernel weights over mesh surfaces or graphs.
The second type, Euclidean convolutions, is independent of any underlying mesh structure.
arXiv Detail & Related papers (2020-04-02T13:52:00Z) - PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.