Möbius Convolutions for Spherical CNNs
- URL: http://arxiv.org/abs/2201.12212v1
- Date: Fri, 28 Jan 2022 16:11:47 GMT
- Title: Möbius Convolutions for Spherical CNNs
- Authors: Thomas W. Mitchel, Noam Aigerman, Vladimir G. Kim, Michael Kazhdan
- Abstract summary: Möbius transformations play an important role in both geometry and spherical image processing.
We present a novel, Möbius-equivariant spherical convolution operator.
We demonstrate its utility by achieving promising results in both shape classification and image segmentation tasks.
- Score: 26.91151736538527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Möbius transformations play an important role in both geometry and
spherical image processing -- they are the group of conformal automorphisms of
2D surfaces and the spherical equivalent of homographies. Here we present a
novel, Möbius-equivariant spherical convolution operator which we call
Möbius convolution, and with it, develop the foundations for
Möbius-equivariant spherical CNNs. Our approach is based on a simple
observation: to achieve equivariance, we only need to consider the
lower-dimensional subgroup which transforms the positions of points as seen in
the frames of their neighbors. To efficiently compute Möbius convolutions
at scale we derive an approximation of the action of the transformations on
spherical filters, allowing us to compute our convolutions in the spectral
domain with the fast Spherical Harmonic Transform. The resulting framework is
both flexible and descriptive, and we demonstrate its utility by achieving
promising results in both shape classification and image segmentation tasks.
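As a concrete illustration of the central object in the abstract (not taken from the paper's implementation), a Möbius transformation can be applied to points on the unit sphere by stereographically projecting to the complex plane, applying a fractional linear map w -> (aw + b)/(cw + d), and projecting back. The sketch below is a minimal NumPy version under that standard construction; the helper names are hypothetical.

```python
import numpy as np

def stereo_proj(p):
    """Stereographic projection of a unit-sphere point (x, y, z) from the
    north pole (0, 0, 1) to the complex plane. Undefined at the pole itself."""
    x, y, z = p
    return (x + 1j * y) / (1.0 - z)

def stereo_unproj(w):
    """Inverse stereographic projection: complex plane back to the unit sphere."""
    d = 1.0 + abs(w) ** 2
    return np.array([2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1.0) / d])

def mobius_on_sphere(p, a, b, c, d):
    """Apply the Mobius transformation w -> (a*w + b) / (c*w + d), with
    ad - bc != 0, to a unit-sphere point p via stereographic projection."""
    w = stereo_proj(p)
    return stereo_unproj((a * w + b) / (c * w + d))

# With a = 1, b = c = 0, d = 1 the map is the identity; with a = i it
# rotates the sphere by 90 degrees about the z-axis.
p = np.array([1.0, 0.0, 0.0])
q = mobius_on_sphere(p, 1j, 0, 0, 1)  # -> approximately (0, 1, 0)
```

Because the map is conformal, angles between curves on the sphere are preserved even though distances are not; this is the structure the paper's convolution operator is equivariant to.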
Related papers
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
First, we introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
Second, we propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z) - Expanding Expressivity in Transformer Models with MöbiusAttention [17.163751713885013]
MöbiusAttention integrates Möbius transformations within the attention mechanism of Transformer-based models.
By incorporating these properties, MöbiusAttention empowers models to learn more intricate geometric relationships between tokens.
arXiv Detail & Related papers (2024-09-08T16:56:33Z) - NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z) - Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds [70.32518963244466]
A major complication in comparison to flat spaces is that it is unclear in which alignment a convolution kernel should be applied on a manifold.
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
Demanding coordinate independence and weight sharing simultaneously is shown to impose an equivariance requirement on the network.
arXiv Detail & Related papers (2021-06-10T19:54:19Z) - Field Convolutions for Surface CNNs [19.897276088740995]
We present a novel surface convolution operator acting on vector fields based on a simple observation.
This formulation combines intrinsic spatial convolution with parallel transport in a scattering operation.
We achieve state-of-the-art results on standard benchmarks in fundamental geometry processing tasks.
arXiv Detail & Related papers (2021-04-08T17:11:14Z) - Convolutional Hough Matching Networks [39.524998833064956]
We introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM).
We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters.
Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations.
arXiv Detail & Related papers (2021-03-31T06:17:03Z) - Spherical Transformer: Adapting Spherical Signal to CNNs [53.18482213611481]
Spherical Transformer can transform spherical signals into vectors that can be directly processed by standard CNNs.
We evaluate our approach on the tasks of spherical MNIST recognition, 3D object classification and omnidirectional image semantic segmentation.
arXiv Detail & Related papers (2021-01-11T12:33:16Z) - Learning Equivariant Representations [10.745691354609738]
Convolutional neural networks (CNNs) are successful examples of leveraging symmetry through equivariant design.
We propose equivariant models for different transformations defined by groups of symmetries.
These models leverage symmetries in the data to reduce sample and model complexity and improve generalization performance.
arXiv Detail & Related papers (2020-12-04T18:46:17Z) - Spin-Weighted Spherical CNNs [58.013031812072356]
We present a new type of spherical CNN that allows anisotropic filters in an efficient way, without ever leaving the sphere domain.
The key idea is to consider spin-weighted spherical functions, which were introduced in physics in the study of gravitational waves.
Our method outperforms previous methods on tasks like classification of spherical images, classification of 3D shapes and semantic segmentation of spherical panoramas.
arXiv Detail & Related papers (2020-06-18T17:57:21Z) - Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs [81.12344211998635]
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.