DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes
- URL: http://arxiv.org/abs/2004.01002v1
- Date: Thu, 2 Apr 2020 13:52:00 GMT
- Title: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes
- Authors: Jonas Schult, Francis Engelmann, Theodora Kontogianni, Bastian Leibe
- Abstract summary: We propose a family of deep hierarchical convolutional networks over 3D geometric data.
The first type, geodesic convolutions, defines the kernel weights over mesh surfaces or graphs.
The second type, Euclidean convolutions, is independent of any underlying mesh structure.
- Score: 28.571946680616765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose DualConvMesh-Nets (DCM-Net), a family of deep hierarchical
convolutional networks over 3D geometric data that combines two types of
convolutions. The first type, geodesic convolutions, defines the kernel weights
over mesh surfaces or graphs. That is, the convolutional kernel weights are
mapped to the local surface of a given mesh. The second type, Euclidean
convolutions, is independent of any underlying mesh structure. The
convolutional kernel is applied on a neighborhood obtained from a local
affinity representation based on the Euclidean distance between 3D points.
Intuitively, geodesic convolutions can easily separate objects that are
spatially close but have disconnected surfaces, while Euclidean convolutions
can represent interactions between nearby objects better, as they are oblivious
to object surfaces. To realize a multi-resolution architecture, we borrow
well-established mesh simplification methods from the geometry processing
domain and adapt them to define mesh-preserving pooling and unpooling
operations. We experimentally show that combining both types of convolutions in
our architecture leads to significant performance gains for 3D semantic
segmentation, and we report competitive results on three scene segmentation
benchmarks. Our models and code are publicly available.
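The two neighborhood notions in the abstract can be made concrete. Below is a minimal sketch, not the authors' implementation, of a Euclidean convolution: neighborhoods come from k-nearest neighbors in 3D Euclidean space, ignoring mesh connectivity entirely, and each neighbor rank is assigned its own kernel matrix. The function names, tensor shapes, and rank-based weight assignment are illustrative assumptions.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors (by Euclidean distance) of each point, excluding itself."""
    # Pairwise distance matrix, (N, N); fine for small N, a k-d tree would scale better.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighborhood
    return np.argsort(d, axis=1)[:, :k]  # (N, k)

def euclidean_conv(points, features, weights, k):
    """Aggregate each point's neighbor features with per-rank kernel matrices.

    points:   (N, 3)           xyz coordinates
    features: (N, C_in)        per-point input features
    weights:  (k, C_in, C_out) one learned kernel matrix per neighbor rank
    returns:  (N, C_out)       convolved per-point features
    """
    idx = knn_indices(points, k)   # (N, k) neighbor indices
    neigh = features[idx]          # (N, k, C_in) gathered neighbor features
    # Sum over neighbor rank j and input channel c.
    return np.einsum('njc,jco->no', neigh, weights)

if __name__ == "__main__":
    pts = np.random.rand(100, 3)
    feats = np.random.rand(100, 8)
    W = np.random.rand(16, 8, 32)  # k=16 neighbor ranks, 8 -> 32 channels
    out = euclidean_conv(pts, feats, W, k=16)
    print(out.shape)  # (100, 32)
```

A geodesic convolution would instead gather neighbors by walking mesh edges, so two vertices that are close in space but lie on disconnected surfaces would never mix features; that contrast is exactly what the paper exploits by combining both types.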
Related papers
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud
Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local and possibly repeating geometry, from global, coarse structures.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z) - Laplacian2Mesh: Laplacian-Based Mesh Understanding [4.808061174740482]
We introduce a novel and flexible convolutional neural network (CNN) model, called Laplacian2Mesh, for 3D triangle meshes.
Mesh pooling is applied to expand the receptive field of the network through the multi-space transformation of the Laplacian.
Experiments on various learning tasks applied to 3D meshes demonstrate the effectiveness and efficiency of Laplacian2Mesh.
arXiv Detail & Related papers (2022-02-01T10:10:13Z) - Mesh Convolution with Continuous Filters for 3D Surface Parsing [101.25796935464648]
We propose a series of modular operations for effective geometric feature learning from 3D triangle meshes.
Our mesh convolutions exploit spherical harmonics as orthonormal bases to create continuous convolutional filters.
We further contribute a novel hierarchical neural network for perceptual parsing of 3D surfaces, named PicassoNet++.
arXiv Detail & Related papers (2021-12-03T09:16:49Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D
Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - Subdivision-Based Mesh Convolution Networks [38.09613983540932]
Convolutional neural networks (CNNs) have made great breakthroughs in 2D computer vision.
This paper introduces a novel CNN framework, named SubdivNet, for 3D triangle meshes with Loop subdivision sequence connectivity.
Experiments on mesh classification, segmentation, correspondence, and retrieval on real-world data demonstrate the effectiveness and efficiency of SubdivNet.
arXiv Detail & Related papers (2021-06-04T06:50:34Z) - DualConv: Dual Mesh Convolutional Networks for Shape Correspondence [44.94765770516059]
Convolutional neural networks have been extremely successful for 2D images and are readily extended to handle 3D voxel data.
In this paper we explore how these networks can be extended to the dual face-based representation of triangular meshes.
Our experiments demonstrate that additionally building convolutional models that explicitly leverage the neighborhood-size regularity of dual meshes enables learning shape representations that perform on par with or better than previous approaches.
arXiv Detail & Related papers (2021-03-23T11:22:47Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.