Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and
Group Convolution
- URL: http://arxiv.org/abs/2211.15903v1
- Date: Tue, 29 Nov 2022 03:42:11 GMT
- Title: Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and
Group Convolution
- Authors: Adrien Poulenard, Maks Ovsjanikov, Leonidas J. Guibas
- Abstract summary: A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input.
We provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks.
We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
- Score: 90.67482899242093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide range of techniques have been proposed in recent years for designing
neural networks for 3D data that are equivariant under rotation and translation
of the input. Most approaches for equivariance under the Euclidean group
$\mathrm{SE}(3)$ of rotations and translations fall within one of the two major
categories. The first category consists of methods that use
$\mathrm{SE}(3)$-convolution which generalizes classical
$\mathbb{R}^3$-convolution on signals over $\mathrm{SE}(3)$. Alternatively, it
is possible to use \textit{steerable convolution} which achieves
$\mathrm{SE}(3)$-equivariance by imposing constraints on
$\mathbb{R}^3$-convolution of tensor fields. It is known by specialists in the
field that the two approaches are equivalent, with steerable convolution being
the Fourier transform of $\mathrm{SE}(3)$ convolution. Unfortunately, these
results are not widely known and moreover the exact relations between deep
learning architectures built upon these two approaches have not been precisely
described in the literature on equivariant deep learning. In this work we
provide an in-depth analysis of both methods and their equivalence and relate
the two constructions to multiview convolutional networks. Furthermore, we
provide theoretical justifications of separability of $\mathrm{SE}(3)$ group
convolution, which explain the applicability and success of some recent
approaches. Finally, we express different methods using a single coherent
formalism and provide explicit formulas that relate the kernels learned by
different methods. In this way, our work helps to unify different
previously-proposed techniques for achieving roto-translational equivariance,
and helps to shed light on both the utility and precise differences between
various alternatives. We also derive new TFN non-linearities from our
equivalence principle and test them on practical benchmark datasets.
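As a reading aid (in our own notation and conventions, not taken from the paper), the two objects being related can be written down concretely. For a signal $f(x, R)$ on $\mathrm{SE}(3)$ and a kernel $\psi$, the group correlation is

$$ (\psi \star f)(x, R) = \int_{\mathbb{R}^3} \int_{\mathrm{SO}(3)} \psi\!\left(R'^{-1}\delta,\; R'^{-1}R\right) f(x+\delta,\, R')\; dR'\, d\delta, $$

and expanding the rotation dependence in Wigner matrices $D^\ell$ turns the $\mathrm{SO}(3)$ integral into a per-frequency matrix product, i.e. an $\mathbb{R}^3$-convolution of type-$\ell$ tensor fields with rotation-steerable kernels; this is the sense in which steerable convolution is the Fourier transform of $\mathrm{SE}(3)$ convolution. The toy sketch below discretizes the group-convolution side: it replaces $\mathrm{SO}(3)$ by its 24-element octahedral subgroup, so that every rotation maps a periodic voxel grid to itself, verifies equivariance exactly, and makes the separable structure visible (each output rotation channel is a sum of plain $\mathbb{R}^3$ correlations with rotated kernel slices). Grid sizes, the choice of subgroup, and all names are our own, not code from the paper.

```python
# Toy check of discrete SE(3)-style group correlation (our construction, not the
# paper's code): rotations are restricted to the 24-element octahedral subgroup
# of SO(3), so the equivariance check below holds exactly on a periodic grid.
import numpy as np
from itertools import permutations, product

# The 24 rotations of the cube: signed permutation matrices with det = +1.
ROTS = []
for perm in permutations(range(3)):
    for signs in product([1, -1], repeat=3):
        R = np.zeros((3, 3), dtype=int)
        for i, (p, s) in enumerate(zip(perm, signs)):
            R[i, p] = s
        if round(np.linalg.det(R)) == 1:
            ROTS.append(R)
INDEX = {R.tobytes(): i for i, R in enumerate(ROTS)}
def ridx(R):
    return INDEX[np.asarray(R, dtype=int).tobytes()]

def rotate_grid(a, R):
    """out[p] = a[R^{-1} p], rotating a cubic array about its centre."""
    n = a.shape[0]
    c = (n - 1) / 2.0
    p = np.indices(a.shape).reshape(3, -1) - c              # centred coordinates
    q = (np.asarray(R).T @ p + c).round().astype(int) % n   # R^{-1} = R^T
    return a[q[0], q[1], q[2]].reshape(a.shape)

def spatial_corr(f, k):
    """Periodic R^3 correlation: out[x] = sum_d k[d] f[x + d]."""
    r = k.shape[0] // 2
    out = np.zeros_like(f)
    for dz, dy, dx in product(range(k.shape[0]), repeat=3):
        out += k[dz, dy, dx] * np.roll(f, (r - dz, r - dy, r - dx), axis=(0, 1, 2))
    return out

def group_corr(f, psi):
    """f: [|G|, n, n, n], psi: [|G|, k, k, k];
    out(x, R) = sum_{R'} sum_d psi(R'^{-1} d, R'^{-1} R) f(x + d, R').
    Separable structure: each output rotation channel is a sum of plain
    R^3 correlations with rotated slices of the kernel."""
    out = np.zeros_like(f)
    for iR, R in enumerate(ROTS):
        for iRp, Rp in enumerate(ROTS):
            k = rotate_grid(psi[ridx(Rp.T @ R)], Rp)  # k(d) = psi(R'^{-1} d, .)
            out[iR] += spatial_corr(f[iRp], k)
    return out

def act(f, g):
    """Left action on a lifted signal: (g . f)(x, R) = f(g^{-1} x, g^{-1} R)."""
    return np.stack([rotate_grid(f[ridx(g.T @ R)], g) for R in ROTS])

rng = np.random.default_rng(0)
f = rng.normal(size=(24, 5, 5, 5))      # random signal on the discretized group
psi = rng.normal(size=(24, 3, 3, 3))    # random group-convolution kernel
g = ROTS[7]
print(np.allclose(group_corr(act(f, g), psi), act(group_corr(f, psi), g)))  # True
```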
Related papers
- Affine Invariance in Continuous-Domain Convolutional Neural Networks [6.019182604573028]
This research studies affine invariance on continuous-domain convolutional neural networks.
We introduce a new criterion to assess the similarity of two input signals under affine transformations.
Our research could eventually extend the scope of geometrical transformations that practical deep-learning pipelines can handle.
arXiv Detail & Related papers (2023-11-13T14:17:57Z)
- Fast, Expressive SE(n) Equivariant Networks through Weight-Sharing in Position-Orientation Space [15.495593104596399]
We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs.
We develop an efficient equivariant group convolutional network for processing 3D point clouds.
arXiv Detail & Related papers (2023-10-04T17:06:32Z)
- Rethinking SO(3)-equivariance with Bilinear Tensor Networks [0.0]
We show that by judicious symmetry breaking, we can efficiently increase the expressiveness of a network operating only on vector and order-2 tensor representations of SO(2).
We demonstrate the method on an important problem from High Energy Physics known as b-tagging, where particle jets originating from b-meson decays must be discriminated from an overwhelming QCD background.
arXiv Detail & Related papers (2023-03-20T17:23:15Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics (a minimal sketch of the canonicalization wrapper appears after this list).
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types (a minimal sketch appears after this list).
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups (a toy instance of the idea appears after this list).
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z)
- Rotation-Invariant Point Convolution With Multiple Equivariant Alignments [1.0152838128195467]
We show that using rotation-equivariant alignments, it is possible to make any convolutional layer rotation-invariant.
With this core layer, we design rotation-invariant architectures which improve state-of-the-art results in both object classification and semantic segmentation.
arXiv Detail & Related papers (2020-12-07T20:47:46Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning (a minimal retraction-based sketch appears after this list).
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
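The learned-canonicalization idea above ("Equivariance with Learned Canonicalization Functions") admits a compact illustration: if a function $c$ maps an input $X$ to a group element with $c(g \cdot X) = g\, c(X)$, then wrapping any backbone $h$ as $c(X)\, h(c(X)^{-1} X)$ is exactly equivariant. A minimal sketch, with a hand-built (rather than learned) canonicalizer and all names our own:

```python
# Hypothetical sketch of the canonicalization wrapper (not the authors' code):
# if c(X) returns a rotation with c(g X) = g c(X), then
#   phi(X) = c(X) h(c(X)^{-1} X)
# is rotation-equivariant for ANY backbone h.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 2)), rng.normal(size=(2, 16))
alpha = rng.normal(size=5)     # stand-in "learned" canonicalization weights

def h(X):                      # arbitrary, non-equivariant backbone
    return np.tanh(X @ W1.T) @ W2.T

def canon(X):
    """Equivariant canonicalizer: the rotation sending the x-axis to the
    (weighted) mean direction of the points. Assumes that mean is nonzero."""
    m = alpha @ X
    m = m / np.linalg.norm(m)
    return np.array([[m[0], -m[1]], [m[1], m[0]]])   # R with R e_x = m

def phi(X):
    R = canon(X)               # c(X), satisfying c(g X) = g c(X)
    return h(X @ R) @ R.T      # rows: R h(R^T x), i.e. c(X) h(c(X)^{-1} X)

# Check equivariance: phi(X g^T) == phi(X) g^T for a random rotation g.
t = 0.7
g = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X = rng.normal(size=(5, 2))
print(np.allclose(phi(X @ g.T), phi(X) @ g.T))  # True
```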
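Frame Averaging replaces the single canonicalizer with a small equivariant set of group elements and averages over it. A minimal sketch, assuming 2D point clouds, a PCA-based frame with distinct eigenvalues, and a stand-in backbone (all our own choices, not the paper's code):

```python
# Hypothetical frame-averaging sketch: phi(X) = mean_{R in F(X)} R h(R^{-1} X),
# with an equivariant frame F(X) from PCA of the second-moment matrix, so that
# F(g X) = g F(X) and phi is exactly equivariant for ANY backbone h.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(16, 2)), rng.normal(size=(2, 16))

def h(X):                              # arbitrary, non-equivariant backbone
    return np.tanh(X @ W1.T) @ W2.T

def frame(X):
    """Four orthogonal matrices [+-v1 +-v2] from PCA of X (assumes distinct
    eigenvalues; the sign enumeration absorbs eigh's sign ambiguity)."""
    _, V = np.linalg.eigh(X.T @ X)     # columns: eigenvectors, ascending order
    return [V * np.array([s1, s2]) for s1, s2 in product([1, -1], repeat=2)]

def phi(X):
    outs = [h(X @ R) @ R.T for R in frame(X)]   # rows: R h(R^T x) per element
    return np.mean(outs, axis=0)

# Check equivariance: phi(X g^T) == phi(X) g^T for a random rotation g.
t = 1.1
g = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X = rng.normal(size=(7, 2))
print(np.allclose(phi(X @ g.T), phi(X) @ g.T))  # True (up to float error)
```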
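The general algorithm of "A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups" solves a linear constraint for the equivariant layers; a toy instance can be reproduced with a plain SVD nullspace. The construction below (our own, not the authors' library) finds all $W$ with $\rho_{out}(g) W = W \rho_{in}(g)$ over a list of group generators:

```python
# Toy nullspace construction for equivariant linear layers (our instance of the
# general idea, not the paper's code).
import numpy as np

def equivariant_basis(reps_in, reps_out, tol=1e-10):
    """Basis of {W : rho_out(g) W = W rho_in(g) for all paired generators},
    via the column-major vec identity
      vec(R_out W - W R_in) = (I kron R_out - R_in^T kron I) vec(W)."""
    n_out, n_in = reps_out[0].shape[0], reps_in[0].shape[0]
    M = np.vstack([np.kron(np.eye(n_in), ro) - np.kron(ri.T, np.eye(n_out))
                   for ri, ro in zip(reps_in, reps_out)])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return [v.reshape(n_out, n_in, order="F") for v in Vt[rank:]]  # un-vec

# Example: the 4-element rotation group C4 acting on R^2, in and out.
r90 = np.array([[0.0, -1.0], [1.0, 0.0]])
basis = equivariant_basis([r90], [r90])
print(len(basis))                                          # 2: the commutant {a I + b J}
print(all(np.allclose(r90 @ W, W @ r90) for W in basis))   # True
```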
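For geometric optimization on the orthogonal group, a standard baseline keeps every iterate exactly orthogonal via a Cayley retraction (Wen & Yin). The paper above develops stochastic-flow algorithms, which are different; the sketch below, with our own toy objective, only illustrates feasible optimization on the group:

```python
# Hypothetical sketch of retraction-based gradient descent on SO(d)
# (an illustration of optimizing over the orthogonal group, not the
# paper's stochastic-flow algorithms).
import numpy as np

rng = np.random.default_rng(2)
d = 6
A = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # well-conditioned data matrix
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # target orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0         # keep the target in SO(d), the component of X0 = I
B = Q @ A                   # f(X) = ||X A - B||_F^2 attains 0 at X = Q

def f_and_grad(X):
    R = X @ A - B
    return np.sum(R * R), 2.0 * R @ A.T         # loss and Euclidean gradient

X, tau = np.eye(d), 0.1
for _ in range(500):
    _, G = f_and_grad(X)
    W = G @ X.T - X @ G.T                       # skew-symmetric direction
    # Cayley retraction: X <- (I + tau/2 W)^{-1} (I - tau/2 W) X
    # keeps X orthogonal up to machine precision at every step.
    X = np.linalg.solve(np.eye(d) + 0.5 * tau * W,
                        (np.eye(d) - 0.5 * tau * W) @ X)

print("final loss:", f_and_grad(X)[0])                              # near 0
print("orthogonality drift:", np.linalg.norm(X.T @ X - np.eye(d)))  # ~1e-14
```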
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.