Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
- URL: http://arxiv.org/abs/2003.05425v3
- Date: Fri, 19 Nov 2021 12:00:16 GMT
- Title: Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
- Authors: Pim de Haan, Maurice Weiler, Taco Cohen and Max Welling
- Abstract summary: A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
- Score: 81.12344211998635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common approach to define convolutions on meshes is to interpret them as a
graph and apply graph convolutional networks (GCNs). Such GCNs utilize
isotropic kernels and are therefore insensitive to the relative orientation of
vertices and thus to the geometry of the mesh as a whole. We propose Gauge
Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge
equivariant kernels. Since the resulting features carry orientation
information, we introduce a geometric message passing scheme defined by
parallel transporting features over mesh edges. Our experiments validate the
significantly improved expressivity of the proposed model over conventional
GCNs and other methods.
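To make the geometric message passing scheme concrete, the sketch below shows a single aggregation step for 2D tangent-vector (rho_1) features in NumPy. It is a minimal illustration under simplifying assumptions, not the authors' implementation: the function name gem_conv_vertex, the particular kernel family K(theta) = R(theta) W R(-theta), and the precomputed neighbour angles and parallel-transport angles are assumptions made here for exposition, whereas the paper derives the complete basis of gauge equivariant kernels for general feature types.

```python
import numpy as np

def rot(angle):
    """2D rotation matrix, acting on rho_1 (tangent-vector) features."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def gem_conv_vertex(f_center, f_neigh, theta, transport, w_self, w_neigh):
    """Toy anisotropic, gauge-equivariant aggregation at one mesh vertex
    (rho_1 -> rho_1 features, i.e. 2D tangent vectors).

    f_center  : (2,) feature at the center vertex, in its own gauge
    f_neigh   : (N, 2) features at the N neighbours, each in its own gauge
    theta     : (N,) angle of each neighbour as seen from the center vertex
    transport : (N,) parallel-transport angle taking each neighbour's gauge
                into the center vertex's gauge (assumed precomputed from the mesh)
    w_self    : pair (a, b); the self-interaction a*I + b*R(90 deg) is restricted
                so that it commutes with gauge rotations
    w_neigh   : (2, 2) learnable weight matrix for the neighbour kernel
    """
    a, b = w_self
    out = (a * np.eye(2) + b * rot(np.pi / 2)) @ f_center
    for f_q, th, g in zip(f_neigh, theta, transport):
        f_q = rot(g) @ f_q                    # parallel transport into the center gauge
        k = rot(th) @ w_neigh @ rot(-th)      # direction-dependent (anisotropic) kernel
        out = out + k @ f_q
    return out

# Toy usage: one vertex with two neighbours.
rng = np.random.default_rng(0)
f_c = rng.normal(size=2)
f_n = rng.normal(size=(2, 2))
out = gem_conv_vertex(f_c, f_n,
                      theta=np.array([0.3, 2.1]),
                      transport=np.array([0.5, -1.2]),
                      w_self=(1.0, 0.2),
                      w_neigh=rng.normal(size=(2, 2)))
print(out)
```

Because planar rotations commute, a gauge change by g at the center vertex shifts the neighbour angles and transport angles by -g and rotates the output by R(-g), so the output transforms as a tangent vector; this is the gauge equivariance property the abstract refers to.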
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Geometric Generative Models based on Morphological Equivariant PDEs and GANs [3.6498648388765513]
We propose a geometric generative model based on an equivariant partial differential equation (PDE) for group convolutional neural networks (G-CNNs).
The proposed geometric morphological GAN (GM-GAN) is obtained by using the proposed morphological equivariant convolutions in PDE-G-CNNs.
Preliminary results show that the GM-GAN model outperforms the classical GAN.
arXiv Detail & Related papers (2024-03-22T01:02:09Z)
- Geometrical aspects of lattice gauge equivariant convolutional neural networks [0.0]
Lattice gauge equivariant convolutional neural networks (L-CNNs) are a framework for convolutional neural networks that can be applied to non-Abelian lattice gauge theories.
arXiv Detail & Related papers (2023-03-20T20:49:08Z)
- E(n)-equivariant Graph Neural Cellular Automata [4.168157981135698]
We propose a class of isotropic automata that we call E(n)-GNCAs.
These models are lightweight, but can nevertheless handle large graphs, capture complex dynamics and exhibit emergent self-organising behaviours.
We showcase the broad and successful applicability of E(n)-GNCAs on three different tasks.
arXiv Detail & Related papers (2023-01-25T10:17:07Z)
- Geometric Scattering on Measure Spaces [12.0756034112778]
We introduce a general, unified model for geometric scattering on measure spaces.
We consider finite measure spaces that are obtained from randomly sampling an unknown manifold.
We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold.
arXiv Detail & Related papers (2022-08-17T22:40:09Z)
- ChebLieNet: Invariant Spectral Graph NNs Turned Equivariant by Riemannian Geometry on Lie Groups [9.195729979000404]
ChebLieNet is a group-equivariant method on (anisotropic) manifolds.
We develop a graph neural network made of anisotropic convolutional layers.
We empirically prove the existence of (data-dependent) sweet spots for anisotropic parameters on CIFAR10.
arXiv Detail & Related papers (2021-11-23T20:19:36Z)
- Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases their performance.
We propose Ortho-GConv, which can augment existing GNN backbones to stabilize model training and improve generalization performance.
arXiv Detail & Related papers (2021-09-23T12:39:01Z)
- Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds [70.32518963244466]
A major complication in comparison to flat spaces is that it is unclear in which alignment a convolution kernel should be applied on a manifold.
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
A simultaneous demand for coordinate independence and weight sharing is shown to result in a requirement that the network be equivariant; a schematic form of the resulting kernel constraint is given after this list.
arXiv Detail & Related papers (2021-06-10T19:54:19Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
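As noted in the Coordinate Independent Convolutional Networks entry above, the simultaneous demand for coordinate independence and weight sharing forces the convolution to be equivariant. A schematic form of the resulting kernel constraint, written here under the assumption of an orthogonal structure group G and with notation chosen for this summary rather than taken from the paper, is:

```latex
% Schematic kernel constraint from coordinate independence + weight sharing,
% assuming an orthogonal structure group G \le O(d):
K(g\,v) \;=\; \rho_{\mathrm{out}}(g)\, K(v)\, \rho_{\mathrm{in}}(g)^{-1}
\qquad \text{for all } g \in G,\; v \in \mathbb{R}^{d}.
```

Solving this constraint for a chosen pair of feature representations yields anisotropic, gauge equivariant kernels of the kind used by the Gauge Equivariant Mesh CNNs above, where G is the group of planar rotations acting on the tangent planes of an oriented mesh.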
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.