MeshWalker: Deep Mesh Understanding by Random Walks
- URL: http://arxiv.org/abs/2006.05353v3
- Date: Thu, 10 Dec 2020 15:39:51 GMT
- Title: MeshWalker: Deep Mesh Understanding by Random Walks
- Authors: Alon Lahav, Ayellet Tal
- Abstract summary: We look at the most popular representation of 3D shapes in computer graphics - a triangular mesh - and ask how it can be utilized within deep learning.
This paper proposes a very different approach, termed MeshWalker, to learn the shape directly from a given mesh.
We show that our approach achieves state-of-the-art results for two fundamental shape analysis tasks.
- Score: 19.594977587417247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most attempts to represent 3D shapes for deep learning have focused on
volumetric grids, multi-view images and point clouds. In this paper we look at
the most popular representation of 3D shapes in computer graphics - a
triangular mesh - and ask how it can be utilized within deep learning. The few
attempts to answer this question propose to adapt convolutions & pooling to
suit Convolutional Neural Networks (CNNs). This paper proposes a very different
approach, termed MeshWalker, to learn the shape directly from a given mesh. The
key idea is to represent the mesh by random walks along the surface, which
"explore" the mesh's geometry and topology. Each walk is organized as a list of
vertices, which in some manner imposes regularity on the mesh. The walk is fed
into a Recurrent Neural Network (RNN) that "remembers" the history of the walk.
We show that our approach achieves state-of-the-art results for two fundamental
shape analysis tasks: shape classification and semantic segmentation.
Furthermore, even a very small number of examples suffices for learning. This
is highly important, since large datasets of meshes are difficult to acquire.
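To make the walk-and-RNN idea concrete, here is a minimal sketch in Python/PyTorch: a random walk is sampled over the mesh's vertex adjacency, its per-step vertex displacements form the sequence, and a GRU consumes that sequence to predict a class. The feature choice, the preference for unvisited neighbours, the GRU sizes and all names are illustrative assumptions for this summary, not the paper's exact architecture.

```python
# Illustrative sketch only: a random walk over mesh vertices fed into a GRU.
# Assumes `vertices` is an (n, 3) numpy array and `adjacency[v]` lists the
# neighbours of vertex v; assumes every vertex has at least one neighbour.
import random
import numpy as np
import torch
import torch.nn as nn

def random_walk(vertices, adjacency, walk_len, start=None):
    """Return a (walk_len, 3) array of per-step vertex displacements."""
    v = random.randrange(len(vertices)) if start is None else start
    visited, steps = {v}, []
    for _ in range(walk_len):
        unvisited = [u for u in adjacency[v] if u not in visited]
        nxt = random.choice(unvisited or list(adjacency[v]))   # prefer unvisited neighbours
        steps.append(vertices[nxt] - vertices[v])              # translation-invariant feature
        visited.add(nxt)
        v = nxt
    return np.stack(steps)

class WalkRNN(nn.Module):
    """GRU that 'remembers' the walk history and predicts a shape class."""
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, walks):            # walks: (batch, walk_len, 3)
        _, h = self.rnn(walks)           # h: (num_layers, batch, hidden)
        return self.head(h[-1])          # classify from the final hidden state
```

In practice several walks per mesh would be sampled and their predictions aggregated, since each walk only "explores" part of the surface.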
Related papers
- DMesh: A Differentiable Mesh Representation [40.800084296073415]
DMesh is a differentiable representation of general 3D triangular meshes.
We first obtain a set of convex tetrahedra that compactly tessellates the domain based on Weighted Delaunay Triangulation (WDT).
We formulate probability of faces to exist on the actual surface in a differentiable manner based on the WDT.
arXiv Detail & Related papers (2024-04-20T18:52:51Z)
- E(3)-Equivariant Mesh Neural Networks [16.158762988735322]
Triangular meshes are widely used to represent three-dimensional objects.
Many recent works have addressed the need for geometric deep learning on 3D meshes.
We extend the equations of E(n)-Equivariant Graph Neural Networks (EGNNs) to incorporate mesh face information.
The resulting architecture, Equivariant Mesh Neural Network (EMNN), outperforms other, more complicated equivariant methods on mesh tasks.
arXiv Detail & Related papers (2024-02-07T13:21:41Z)
- CircNet: Meshing 3D Point Clouds with Circumcenter Detection [67.23307214942696]
Reconstructing 3D point clouds into triangle meshes is a key problem in computational geometry and surface reconstruction.
We introduce a deep neural network that detects the circumcenters to achieve point cloud triangulation.
We validate our method on prominent datasets of both watertight and open surfaces.
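As a side note on the quantity such a network is trained to detect: the circumcenter of a 3D triangle (the point equidistant from its three vertices, lying in the triangle's plane) has a simple closed form. The snippet below is plain reference geometry and is independent of the paper's network.

```python
# Closed-form circumcenter of a 3D triangle (reference geometry, not CircNet's model).
import numpy as np

def circumcenter(p1, p2, p3):
    a, b = p1 - p3, p2 - p3
    axb = np.cross(a, b)                                        # normal of the triangle plane
    num = np.cross(np.dot(a, a) * b - np.dot(b, b) * a, axb)
    return p3 + num / (2.0 * np.dot(axb, axb))

# Sanity check: a right triangle's circumcenter is the midpoint of its hypotenuse.
print(circumcenter(np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.zeros(3)))  # [0.5 0.5 0. ]
```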
arXiv Detail & Related papers (2023-01-23T03:32:57Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [69.94313958962165]
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space.
Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies.
arXiv Detail & Related papers (2021-12-13T03:13:11Z)
- Mesh Convolution with Continuous Filters for 3D Surface Parsing [101.25796935464648]
We propose a series of modular operations for effective geometric feature learning from 3D triangle meshes.
Our mesh convolutions exploit spherical harmonics as orthonormal bases to create continuous convolutional filters.
We further contribute a novel hierarchical neural network for perceptual parsing of 3D surfaces, named PicassoNet++.
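To illustrate the "spherical harmonics as orthonormal bases" idea in a rough numeric sketch: a continuous filter's response for a relative direction can be written as a learned linear combination of harmonics evaluated at that direction. The degree cutoff, the real-valued basis construction (up to normalisation constants), and the weight shape below are assumptions for illustration, not PicassoNet++'s actual parameterisation.

```python
# Sketch: a direction-continuous filter as a linear combination of spherical harmonics.
import numpy as np
from scipy.special import sph_harm

def sh_basis(directions, max_degree=2):
    """Evaluate a real-valued harmonic basis (up to normalisation) for unit directions."""
    d = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)    # azimuth in [0, 2*pi)
    phi = np.arccos(np.clip(d[:, 2], -1.0, 1.0))          # polar angle in [0, pi]
    cols = []
    for n in range(max_degree + 1):
        for m in range(-n, n + 1):
            y = sph_harm(m, n, theta, phi)
            cols.append(y.real if m >= 0 else y.imag)     # real-valued components
    return np.stack(cols, axis=-1)                        # (num_dirs, (max_degree + 1)**2)

def continuous_filter(directions, weights):
    """Filter response per direction: learned combination of the harmonic basis."""
    return sh_basis(directions) @ weights

# Example: responses of one (randomly initialised) filter for three relative directions.
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
w = np.random.randn((2 + 1) ** 2)                         # 9 basis functions for degree <= 2
print(continuous_filter(dirs, w))
```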
arXiv Detail & Related papers (2021-12-03T09:16:49Z)
- CloudWalker: Random walks for 3D point cloud shape analysis [20.11028799145883]
We propose CloudWalker, a novel method for learning 3D shapes using random walks.
Our approach achieves state-of-the-art results for two 3D shape analysis tasks: classification and retrieval.
arXiv Detail & Related papers (2021-12-02T08:24:01Z)
- LatticeNet: Fast Spatio-Temporal Point Cloud Segmentation Using Permutohedral Lattices [27.048998326468688]
Deep convolutional neural networks (CNNs) have shown outstanding performance in the task of semantically segmenting images.
Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input.
We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-08-09T10:17:27Z)
- AttWalk: Attentive Cross-Walks for Deep Mesh Analysis [19.12196187222047]
Mesh representation by random walks has been shown to benefit deep learning.
We propose a novel walk-attention mechanism that leverages the fact that multiple walks are used.
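A rough sketch of what attention over multiple walks can look like (a hypothetical layer, not AttWalk's published architecture): each walk is first encoded to a vector, a small scoring head weights the walks, and the mesh descriptor is their weighted sum.

```python
# Hypothetical attention pooling over per-walk embeddings (illustration only).
import torch
import torch.nn as nn

class WalkAttentionPool(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)                 # one attention score per walk

    def forward(self, walk_embeddings):                # (batch, num_walks, dim)
        weights = torch.softmax(self.score(walk_embeddings), dim=1)
        return (weights * walk_embeddings).sum(dim=1)  # (batch, dim) mesh descriptor

pool = WalkAttentionPool(dim=256)
desc = pool(torch.randn(4, 8, 256))                    # 4 meshes, 8 walks each -> (4, 256)
```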
arXiv Detail & Related papers (2021-04-23T13:02:39Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.