Tensor Frames -- How To Make Any Message Passing Network Equivariant
- URL: http://arxiv.org/abs/2405.15389v2
- Date: Fri, 9 Aug 2024 14:40:27 GMT
- Title: Tensor Frames -- How To Make Any Message Passing Network Equivariant
- Authors: Peter Lippmann, Gerrit Gerhartz, Roman Remme, Fred A. Hamprecht
- Abstract summary: We present a novel framework for building equivariant message passing architectures.
We produce state-of-the-art results on normal vector regression on point clouds.
- Score: 15.687514300950813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many applications of geometric deep learning, the choice of global coordinate frame is arbitrary, and predictions should be independent of the reference frame. In other words, the network should be equivariant with respect to rotations and reflections of the input, i.e., the transformations of O(d). We present a novel framework for building equivariant message passing architectures and modifying existing non-equivariant architectures to be equivariant. Our approach is based on local coordinate frames, between which geometric information is communicated consistently by including tensorial objects in the messages. Our framework can be applied to message passing on geometric data in arbitrary dimensional Euclidean space. While many other approaches for equivariant message passing require specialized building blocks, such as non-standard normalization layers or non-linearities, our approach can be adapted straightforwardly to any existing architecture without such modifications. We explicitly demonstrate the benefit of O(3)-equivariance for a popular point cloud architecture and produce state-of-the-art results on normal vector regression on point clouds.
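To illustrate the core idea of local frames and tensorial messages, here is a minimal NumPy sketch (not the authors' implementation; the PCA-based frame construction and its sign convention are illustrative assumptions): each point is assigned an orthonormal frame built from its neighborhood, and a vector-valued message is re-expressed in the receiver's frame, so its local coordinates do not change under global rotations and reflections of the input.

```python
import numpy as np

def local_frame(neighbors, center):
    """Orthonormal d x d frame for a point, built from its neighborhood by PCA.
    The sign convention is a crude illustration; the paper's construction differs."""
    offsets = neighbors - center                    # (k, d) relative positions
    _, vecs = np.linalg.eigh(offsets.T @ offsets)   # columns = principal axes
    signs = np.sign(vecs.T @ offsets.mean(axis=0))  # orient axes towards the neighbor mean
    signs[signs == 0] = 1.0
    return vecs * signs                             # orthogonal, co-rotates with the input

def message_in_receiver_frame(vec_feature, frame):
    """Express a vector (rank-1 tensor) feature in the receiver's local frame.
    If the input is rotated, both the feature and the frame co-rotate, so these
    local coordinates -- and hence the message content -- stay the same."""
    return frame.T @ vec_feature

# Toy check: local-frame coordinates are unchanged by a global rotation of the input.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
frame = local_frame(pts[1:], pts[0])
frame_rot = local_frame(pts[1:] @ R.T, pts[0] @ R.T)
v = pts[3] - pts[0]                                 # a relative-position "message"
print(np.allclose(message_in_receiver_frame(v, frame),
                  message_in_receiver_frame(v @ R.T, frame_rot), atol=1e-6))  # expected: True
```

Higher-rank tensorial features would transform with one copy of the frame per tensor index; the unmodified backbone then only ever operates on frame-local coordinates.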
Related papers
- Current Symmetry Group Equivariant Convolution Frameworks for Representation Learning [5.802794302956837]
Euclidean deep learning is often inadequate for addressing real-world signals where the representation space is irregular and curved with complex topologies.
We focus on the importance of symmetry group equivariant deep learning models and their realization of convolution-like operations on graphs, 3D shapes, and non-Euclidean spaces.
arXiv Detail & Related papers (2024-09-11T15:07:18Z)
- Grounding Continuous Representations in Geometry: Equivariant Neural Fields [26.567143650213225]
We propose a novel CNF architecture which uses a geometry-informed cross-attention to condition the NeF on a geometric variable.
We show that this approach induces a steerability property by which both field and latent are grounded in geometry.
We validate these main properties in a range of tasks including classification, segmentation, forecasting, reconstruction and generative modelling.
arXiv Detail & Related papers (2024-06-09T12:16:30Z)
- Metric Convolutions: A Unifying Theory to Adaptive Convolutions [3.481985817302898]
Metric convolutions replace standard convolutions in image processing and deep learning.
They require fewer parameters and provide better generalisation.
Our approach shows competitive performance in standard denoising and classification tasks.
arXiv Detail & Related papers (2024-06-08T08:41:12Z)
- A Canonicalization Perspective on Invariant and Equivariant Learning [54.44572887716977]
We introduce a canonicalization perspective that provides an essential and complete view of the design of frames.
We show that there exists an inherent connection between frames and canonical forms.
We design novel frames for eigenvectors that are strictly superior to existing methods.
arXiv Detail & Related papers (2024-05-28T17:22:15Z)
- A Simple Strategy to Provable Invariance via Orbit Mapping [14.127786615513978]
We propose a method to make network architectures provably invariant with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
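A minimal sketch of what such an 'undo' (orbit mapping) step can look like for point clouds, assuming a PCA-based canonicalization for illustration rather than the paper's exact construction:

```python
import numpy as np

def canonicalize(points):
    """Map a point cloud to a canonical representative of its rotation/translation
    orbit (illustrative; assumes distinct principal values and non-zero third moments)."""
    centered = points - points.mean(axis=0)
    _, axes = np.linalg.eigh(centered.T @ centered)  # principal axes
    proj = centered @ axes
    signs = np.sign((proj ** 3).sum(axis=0))         # fix the sign of each axis
    signs[signs == 0] = 1.0
    return proj * signs

def invariant_prediction(backbone, points):
    """Any backbone becomes invariant on such generic inputs when it only ever
    sees the canonicalized ('undone') point cloud."""
    return backbone(canonicalize(points))
```

Equivariant outputs can then be recovered by mapping the backbone's prediction back with the inverse of the undone transformation.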
arXiv Detail & Related papers (2022-09-24T03:40:42Z)
- SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z)
- Equivariant Point Cloud Analysis via Learning Orientations for Message Passing [17.049105822164865]
We propose a novel framework to achieve equivariance for point cloud analysis based on the message passing (graph neural network) scheme.
We find that equivariance can be obtained by introducing an orientation for each point, which decouples the relative positions around a point from the global pose of the entire point cloud.
Before aggregating information from the neighbors of a point, the network transforms the neighbors' coordinates based on the point's learned orientation.
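Schematically, that step might look as follows (an illustrative sketch; `orientations` stands in for the per-point rotations the method learns, and `mlp` for any ordinary feature network):

```python
import numpy as np

def aggregate_in_learned_frames(points, orientations, neighbor_idx, mlp):
    """For each point i, express neighbor offsets in i's learned orientation
    (a d x d rotation that co-rotates with the input) before pooling them with
    a plain, non-equivariant `mlp`; the pooled feature is then pose-independent."""
    feats = []
    for i, nbrs in enumerate(neighbor_idx):
        offsets = points[nbrs] - points[i]    # relative positions in global coordinates
        local = offsets @ orientations[i]     # coordinates in point i's frame
        feats.append(mlp(local).sum(axis=0))  # permutation-invariant aggregation
    return np.stack(feats)
```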
arXiv Detail & Related papers (2022-03-28T04:10:13Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and the mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
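As a concrete illustration of the frame-averaging recipe for O(3) invariance on point clouds (a sketch that assumes the common PCA frame with all 2^3 sign choices; the paper's frames differ by task):

```python
import itertools
import numpy as np

def pca_frame(points):
    """Input-dependent frame F(X): sign flips of the principal axes of the
    centered cloud (8 orthogonal matrices for d = 3)."""
    centered = points - points.mean(axis=0)
    _, axes = np.linalg.eigh(centered.T @ centered)
    return [axes * np.array(s) for s in itertools.product((1.0, -1.0), repeat=3)]

def frame_averaged(backbone, points):
    """Invariant prediction: average the backbone over all frame elements.
    For equivariant outputs, each result would be mapped back with its frame
    element before averaging."""
    centered = points - points.mean(axis=0)
    return np.mean([backbone(centered @ F) for F in pca_frame(points)], axis=0)
```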
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for making network architectures provably invariant with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
arXiv Detail & Related papers (2021-06-18T10:31:00Z)
- Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds [70.32518963244466]
A major complication in comparison to flat spaces is that it is unclear in which alignment a convolution kernel should be applied on a manifold.
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
A simultaneous demand for coordinate independence and weight sharing is shown to result in a requirement on the network to be equivariant.
arXiv Detail & Related papers (2021-06-10T19:54:19Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm.
Based on our operator, we build a capsule network that disentangles geometry from pose.
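For context, the Weiszfeld algorithm that the routing is related to computes a geometric median by iteratively re-weighted averaging; a textbook sketch (not the capsule routing itself):

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9):
    """Geometric median of an (n, d) array: repeatedly average the points with
    weights inversely proportional to their distance from the current estimate."""
    x = points.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)
        w = 1.0 / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x
```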
arXiv Detail & Related papers (2019-12-27T13:51:17Z)