Field Convolutions for Surface CNNs
- URL: http://arxiv.org/abs/2104.03916v1
- Date: Thu, 8 Apr 2021 17:11:14 GMT
- Title: Field Convolutions for Surface CNNs
- Authors: Thomas W. Mitchel, Vladimir G. Kim, Michael Kazhdan
- Abstract summary: We present a novel surface convolution operator acting on vector fields based on a simple observation.
This formulation combines intrinsic spatial convolution with parallel transport in a scattering operation.
We achieve state-of-the-art results on standard benchmarks in fundamental geometry processing tasks.
- Score: 19.897276088740995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel surface convolution operator acting on vector fields that
is based on a simple observation: instead of combining neighboring features
with respect to a single coordinate parameterization defined at a given point,
we have every neighbor describe the position of the point within its own
coordinate frame. This formulation combines intrinsic spatial convolution with
parallel transport in a scattering operation while placing no constraints on
the filters themselves, providing a definition of convolution that commutes
with the action of isometries, has increased descriptive potential, and is
robust to noise and other nuisance factors. The result is a rich notion of
convolution which we call field convolution, well-suited for CNNs on surfaces.
Field convolutions are flexible and straightforward to implement, and their
highly discriminating nature has cascading effects throughout the learning
pipeline. Using simple networks constructed from residual field convolution
blocks, we achieve state-of-the-art results on standard benchmarks in
fundamental geometry processing tasks, such as shape classification,
segmentation, correspondence, and sparse matching.
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps [8.679251532993428]
We propose a normative model for spatial representation in the hippocampal formation.
We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension.
More generally, the model formalizes how compositional computations could occur in the hippocampal formation.
arXiv Detail & Related papers (2024-06-27T00:53:53Z) - Algebraic Topological Networks via the Persistent Local Homology Sheaf [15.17547132363788]
We introduce a novel approach to enhance graph convolution and attention modules by incorporating local topological properties of the data.
We consider the framework of sheaf neural networks, which has been previously leveraged to incorporate additional structure into graph neural networks' features.
arXiv Detail & Related papers (2023-11-16T19:24:20Z) - Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields [33.38609930708073]
We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface representation that encodes topology with a deformation field from a known base domain.
Compared to implicit surfaces, ENS trains faster and has several orders of magnitude faster inference times.
arXiv Detail & Related papers (2023-06-05T15:24:33Z) - Temporally-Consistent Surface Reconstruction using Metrically-Consistent Atlases [131.50372468579067]
We propose a method for unsupervised reconstruction of a temporally-consistent sequence of surfaces from a sequence of time-evolving point clouds.
We represent the reconstructed surfaces as atlases computed by a neural network, which enables us to establish correspondences between frames.
Our approach outperforms state-of-the-art ones on several challenging datasets.
arXiv Detail & Related papers (2021-11-12T17:48:25Z) - VolterraNet: A higher order convolutional network with group equivariance for homogeneous manifolds [19.39397826006002]
Convolutional neural networks have been highly successful in image-based learning tasks.
Recent work has generalized the traditional convolutional layer of a convolutional neural network to non-Euclidean spaces.
We present a novel higher order Volterra convolutional neural network (VolterraNet) for data defined as samples of functions.
arXiv Detail & Related papers (2021-06-05T19:28:16Z) - X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operation.
The proposed X-volution achieves highly competitive visual understanding improvements.
arXiv Detail & Related papers (2021-06-04T04:32:02Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - CNNs on Surfaces using Rotation-Equivariant Features [10.259432250871997]
Transport of filter kernels on surfaces results in a rotational ambiguity, which prevents a uniform alignment of these kernels on the surface.
We propose a network architecture for surfaces that consists of vector-valued, rotation-equivariant features.
We evaluate the resulting networks on shape correspondence and shape classification tasks and compare their performance to other approaches.
arXiv Detail & Related papers (2020-06-02T12:46:00Z) - Permutation Matters: Anisotropic Convolutional Layer for Learning on Point Clouds [145.79324955896845]
We propose a permutable anisotropic convolutional operation (PAI-Conv) that calculates soft-permutation matrices for each point.
Experiments on point clouds demonstrate that PAI-Conv produces competitive results in classification and semantic segmentation tasks.
arXiv Detail & Related papers (2020-05-27T02:42:29Z) - Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm (sketched after this list).
Based on our operator, we build a capsule network that disentangles geometry from pose.
arXiv Detail & Related papers (2019-12-27T13:51:17Z)
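The last entry above connects capsule routing to the Weiszfeld algorithm. For reference, the classical algorithm itself is a short iteratively re-weighted averaging scheme for the geometric median; the sketch below is a plain NumPy illustration of that classical iteration, not the paper's routing layer.

```python
# Classical Weiszfeld iteration for the geometric median (illustrative only).
import numpy as np


def weiszfeld(points, iters=100, eps=1e-9):
    """Geometric median of `points` with shape [N, D]."""
    y = points.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)   # distances to current estimate
        w = 1.0 / np.maximum(d, eps)             # inverse-distance weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:      # stop once the update stalls
            break
        y = y_new
    return y
```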
This list is automatically generated from the titles and abstracts of the papers on this site.