Equivariant Maps for Hierarchical Structures
- URL: http://arxiv.org/abs/2006.03627v2
- Date: Tue, 24 Nov 2020 01:54:52 GMT
- Title: Equivariant Maps for Hierarchical Structures
- Authors: Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh
- Abstract summary: We show that the symmetry of a hierarchical structure is the "wreath product" of the symmetries of its building blocks.
By voxelizing the point cloud, we impose a hierarchy of translation and permutation symmetries on the data.
We report state-of-the-art results on Semantic3D, S3DIS, and vKITTI, which include some of the largest real-world point-cloud benchmarks.
- Score: 17.931059591895984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While invariant and equivariant maps make it possible to apply deep
learning to a range of primitive data structures, a formalism for dealing with
hierarchy is lacking. This is a significant issue because many practical
structures are hierarchies of simple building blocks; some examples include
sequences of sets, graphs of graphs, or multiresolution images. Observing that
the symmetry of a hierarchical structure is the "wreath product" of symmetries
of the building blocks, we express the equivariant map for the hierarchy using
an intuitive combination of the equivariant linear layers of the building
blocks. More generally, we show that any equivariant map for the hierarchy has
this form. To demonstrate the effectiveness of this approach to model design,
we consider its application in the semantic segmentation of point-cloud data.
By voxelizing the point cloud, we impose a hierarchy of translation and
permutation symmetries on the data and report state-of-the-art results on Semantic3D,
S3DIS, and vKITTI, which include some of the largest real-world point-cloud
benchmarks.
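To make the abstract's claim concrete, the sketch below is a minimal linear layer for a "set of sets" (for example, points grouped into voxels) that is equivariant to both of the nested symmetries: reordering points within a block and reordering the blocks themselves. This is an illustrative PyTorch sketch written for this summary, not the authors' released code; the class name, the mean-pooling choice, and the (batch, blocks, elements, channels) layout are assumptions.

```python
import torch
import torch.nn as nn

class NestedSetEquivariantLinear(nn.Module):
    """Sketch of a linear layer equivariant to the nested permutation
    symmetries of a set of sets: reordering elements inside each block
    (inner symmetry) and reordering the blocks themselves (outer symmetry).
    Input/output shape: (batch, n_blocks, n_elements, channels)."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w_elem = nn.Linear(d_in, d_out, bias=False)    # per-element term
        self.w_block = nn.Linear(d_in, d_out, bias=False)   # within-block pooled term
        self.w_global = nn.Linear(d_in, d_out, bias=True)   # globally pooled term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        block_mean = x.mean(dim=2, keepdim=True)        # pool over elements in each block
        global_mean = x.mean(dim=(1, 2), keepdim=True)  # pool over all blocks and elements
        # Permuting elements within a block, or permuting whole blocks,
        # permutes the output in exactly the same way.
        return self.w_elem(x) + self.w_block(block_mean) + self.w_global(global_mean)


if __name__ == "__main__":
    layer = NestedSetEquivariantLinear(32, 64)
    x = torch.randn(4, 8, 16, 32)                # 4 scenes, 8 voxels, 16 points each
    y = layer(x)                                 # -> (4, 8, 16, 64)
    perm = torch.randperm(8)
    assert torch.allclose(layer(x[:, perm]), y[:, perm], atol=1e-5)  # block equivariance
```

The set-of-sets case is only the simplest instance of the pattern: for the point-cloud application described in the abstract, the outer symmetry over voxels would involve translations handled by convolution-like layers rather than the plain pooling used here.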
Related papers
- StructRe: Rewriting for Structured Shape Modeling [63.792684115318906]
We present StructRe, a structure rewriting system, as a novel approach to structured shape modeling.
Given a 3D object represented by points and components, StructRe can rewrite it upward into more concise structures, or downward into more detailed structures.
arXiv Detail & Related papers (2023-11-29T10:35:00Z)
- LISNeRF Mapping: LiDAR-based Implicit Mapping via Semantic Neural Fields for Large-Scale 3D Scenes [2.822816116516042]
Large-scale semantic mapping is crucial for outdoor autonomous agents to fulfill high-level tasks such as planning and navigation.
This paper proposes a novel method for large-scale 3D semantic reconstruction through implicit representations from posed LiDAR measurements alone.
arXiv Detail & Related papers (2023-11-04T03:55:38Z) - Counterfactual Explanations for Graph Classification Through the Lenses
of Density [19.53018353016675]
We define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers.
We show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques.
We evaluate the effectiveness of our approaches on 7 brain network datasets and compare the generated counterfactual statements according to several widely used metrics.
arXiv Detail & Related papers (2023-07-27T13:28:18Z)
- P-tensors: a General Formalism for Constructing Higher Order Message Passing Networks [5.257115841810258]
We show that higher order graph neural networks can achieve better accuracy than their standard message passing counterparts.
We formalize these structures as permutation equivariant tensors, or P-tensors, and derive a basis for all linear maps between arbitrary order equivariant P-tensors.
arXiv Detail & Related papers (2023-06-19T08:21:30Z)
- Hyperbolic Diffusion Embedding and Distance for Hierarchical Representation Learning [13.976918651426205]
This paper presents a new method for hierarchical data embedding and distance.
Our method relies on combining diffusion geometry, a central approach to manifold learning, and hyperbolic geometry.
We show theoretically that our embedding and distance recover the underlying hierarchical structure.
arXiv Detail & Related papers (2023-05-30T11:49:39Z)
- DepGraph: Towards Any Structural Pruning [68.40343338847664]
We study general structural pruning of arbitrary architecture like CNNs, RNNs, GNNs and Transformers.
We propose a general and fully automatic method, Dependency Graph (DepGraph), to explicitly model the dependency between layers and comprehensively group parameters for pruning.
In this work, we extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision transformer for images, GAT for graph, DGCNN for 3D point cloud, alongside LSTM for language, and demonstrate that, even with a
arXiv Detail & Related papers (2023-01-30T14:02:33Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it complements the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Unsupervised Scale-Invariant Multispectral Shape Matching [7.04719493717788]
Alignment between non-rigid stretchable structures is one of the hardest tasks in computer vision.
We present an unsupervised neural network architecture based upon the spectrum of scale-invariant geometry.
Our method is agnostic to local-scale deformations and shows superior performance for matching shapes from different domains.
arXiv Detail & Related papers (2020-12-19T13:44:45Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- On Learning Sets of Symmetric Elements [63.12061960528641]
This paper presents a principled approach to learning sets of general symmetric elements.
We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements.
We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions (a sketch of this layer structure follows the list).
arXiv Detail & Related papers (2020-02-20T07:29:20Z)
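The DSS entry above describes linear layers that are equivariant both to reordering the set elements and to each element's own symmetry. As a hedged illustration, a layer of that shape for a set of images, where the per-element symmetry is translation and is handled by convolution, might look like the sketch below; the class name, kernel size, and sum pooling are illustrative choices made for this summary, not the paper's code.

```python
import torch
import torch.nn as nn

class DSSConv2d(nn.Module):
    """Sketch of a DSS-style layer for a set of images: a shared ("siamese")
    convolution applied to every element plus a convolution applied to the
    set-pooled image, so the layer respects both element reordering and
    image translation. Input shape: (batch, set_size, channels, H, W)."""

    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.per_element = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.pooled = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, c, h, w = x.shape
        elem = self.per_element(x.reshape(b * n, c, h, w)).reshape(b, n, -1, h, w)
        agg = self.pooled(x.sum(dim=1)).unsqueeze(1)  # same pooled term added to every element
        return elem + agg
```

The same two-term structure (a per-element equivariant map plus an equivariant map of the pooled set) is what relates this line of work to the hierarchical layers of the main paper above.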
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.