Equivariant Maps for Hierarchical Structures
- URL: http://arxiv.org/abs/2006.03627v2
- Date: Tue, 24 Nov 2020 01:54:52 GMT
- Title: Equivariant Maps for Hierarchical Structures
- Authors: Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh
- Abstract summary: We show that symmetry of a hierarchical structure is the "wreath product" of symmetries of the building blocks.
By voxelizing the point cloud, we impose a hierarchy of translation and permutation symmetries on the data.
We report state-of-the-art results on Semantic3D, S3DIS, and vKITTI, which include some of the largest real-world point-cloud benchmarks.
- Score: 17.931059591895984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While invariant and equivariant maps make it possible to apply deep
learning to a range of primitive data structures, a formalism for dealing with
hierarchy is lacking. This is a significant issue because many practical
structures are hierarchies of simple building blocks; some examples include
sequences of sets, graphs of graphs, or multiresolution images. Observing that
the symmetry of a hierarchical structure is the "wreath product" of symmetries
of the building blocks, we express the equivariant map for the hierarchy using
an intuitive combination of the equivariant linear layers of the building
blocks. More generally, we show that any equivariant map for the hierarchy has
this form. To demonstrate the effectiveness of this approach to model design,
we consider its application in the semantic segmentation of point-cloud data.
By voxelizing the point cloud, we impose a hierarchy of translation and
permutation symmetries on the data and report state-of-the-art results on
Semantic3D, S3DIS, and vKITTI, which include some of the largest real-world
point-cloud benchmarks.
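The abstract's core construction can be illustrated with a toy nested-set example. The sketch below assumes permutation symmetry at both levels of the hierarchy and uses mean-pooling terms as one simple choice of equivariant linear maps; it is an illustration of the "combine the blocks' equivariant layers" idea, not the authors' exact layer, and all names and sizes are made up for the example.

```python
import numpy as np

def hierarchical_layer(x, w_elem, w_inner, w_outer):
    # x: (n_outer, n_inner, d) -- a set of sets.
    # Combine per-element, inner-pooled and globally-pooled terms;
    # each term is equivariant on its own, so the sum is equivariant
    # to the wreath product of the two permutation groups.
    inner_mean = x.mean(axis=1, keepdims=True)       # pool within each inner set
    outer_mean = x.mean(axis=(0, 1), keepdims=True)  # pool over the whole hierarchy
    return x @ w_elem + inner_mean @ w_inner + outer_mean @ w_outer

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5, 3))
w1, w2, w3 = (rng.normal(size=(3, 3)) for _ in range(3))
y = hierarchical_layer(x, w1, w2, w3)

# Permuting the outer set permutes the output the same way ...
p = rng.permutation(4)
assert np.allclose(hierarchical_layer(x[p], w1, w2, w3), y[p])
# ... and a shared permutation of every inner set does too.
q = rng.permutation(5)
assert np.allclose(hierarchical_layer(x[:, q], w1, w2, w3), y[:, q])
```

The same check passes for an independent permutation inside each inner set, since the inner mean of each set is invariant to its own reordering; that combination of independent inner actions with an outer action is exactly what the wreath product captures.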
Related papers
- Self-Attention as a Parametric Endofunctor: A Categorical Framework for Transformer Architectures [0.0]
We develop a category-theoretic framework focusing on the linear components of self-attention.
We show that the query, key, and value maps naturally define a parametric 1-morphism in the 2-category $\mathbf{Para}(\mathbf{Vect})$.
Stacking multiple self-attention layers corresponds to constructing the free monad on this endofunctor.
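The linear components this paper studies can be made concrete with a minimal NumPy sketch of single-head scaled dot-product attention (no residuals, masking, or normalization; all sizes illustrative). The categorical reading is the paper's; the code only shows the linear Q/K/V maps and what "stacking" means operationally.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    # The query, key and value maps are plain linear maps on token
    # features; one attention layer combines them with a row-wise
    # softmax over the query-key scores.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 8))   # 6 tokens, 8 features (illustrative)

# "Stacking" layers is composing the same map with fresh parameters --
# the construction the paper interprets as iterating the endofunctor.
h = x
for _ in range(2):
    w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
    h = self_attention(h, w_q, w_k, w_v)
```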
arXiv Detail & Related papers (2025-01-06T11:14:18Z)
- Learning Structured Representations with Hyperbolic Embeddings [22.95613852886361]
We propose HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations.
Experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion.
For a better understanding of structured representation, we perform eigenvalue analysis that links the representation geometry to improved Out-of-Distribution (OOD) detection performance.
arXiv Detail & Related papers (2024-12-02T00:56:44Z)
- Open-Vocabulary Octree-Graph for 3D Scene Understanding [54.11828083068082]
Octree-Graph is a novel scene representation for open-vocabulary 3D scene understanding.
An adaptive octree structure is developed that stores semantics and represents the occupancy of an object adaptively according to its shape.
arXiv Detail & Related papers (2024-11-25T10:14:10Z)
- StructRe: Rewriting for Structured Shape Modeling [63.792684115318906]
We present StructRe, a structure rewriting system, as a novel approach to structured shape modeling.
Given a 3D object represented by points and components, StructRe can rewrite it upward into more concise structures, or downward into more detailed structures.
arXiv Detail & Related papers (2023-11-29T10:35:00Z)
- LISNeRF Mapping: LiDAR-based Implicit Mapping via Semantic Neural Fields for Large-Scale 3D Scenes [2.822816116516042]
Large-scale semantic mapping is crucial for outdoor autonomous agents to fulfill high-level tasks such as planning and navigation.
This paper proposes a novel method for large-scale 3D semantic reconstruction through implicit representations from posed LiDAR measurements alone.
arXiv Detail & Related papers (2023-11-04T03:55:38Z)
- Counterfactual Explanations for Graph Classification Through the Lenses of Density [19.53018353016675]
We define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers.
We show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques.
We evaluate the effectiveness of our approaches on 7 brain network datasets and compare the generated counterfactuals according to several widely-used metrics.
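The triangle-based instantiation can be sketched as a greedy search: restrict attention to edge flips whose endpoints share a neighbour (so each flip opens or closes a triangle) and return the first flip that changes the prediction. This is an illustrative toy, with a made-up triangle-detecting classifier, not the paper's method or data.

```python
import numpy as np
from itertools import combinations

def has_triangle(adj):
    # trace(A^3) counts closed 3-walks, so it is > 0 iff a triangle exists.
    return int(np.trace(adj @ adj @ adj) > 0)

def triangle_counterfactual(adj, classifier):
    # Greedy density-based counterfactual search (sketch): only flip an
    # edge (i, j) when i and j share a neighbour, i.e. the flip opens or
    # closes at least one triangle; stop at the first label change.
    label = classifier(adj)
    n = len(adj)
    for i, j in combinations(range(n), 2):
        if any(adj[i, k] and adj[j, k] for k in range(n) if k not in (i, j)):
            flipped = adj.copy()
            flipped[i, j] = flipped[j, i] = 1 - flipped[i, j]
            if classifier(flipped) != label:
                return flipped
    return None  # no single triangle-flip counterfactual found

# A triangle on nodes {0, 1, 2} plus an isolated node.
adj = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2)]:
    adj[i, j] = adj[j, i] = 1

cf = triangle_counterfactual(adj, has_triangle)
assert cf is not None and has_triangle(cf) != has_triangle(adj)
```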
arXiv Detail & Related papers (2023-07-27T13:28:18Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second, it completes the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
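The frame-averaging idea behind this line of work can be made concrete for a toy symmetry group. The sketch below assumes the reflection group {+1, -1} acting on vectors, where the whole group serves as the frame; the paper's frames handle much richer shape-space symmetries, and the encoder here is an arbitrary made-up map.

```python
import numpy as np

def frame_average(f, x):
    # Frame averaging over the reflection group G = {+1, -1}:
    #   <f>(x) = (1/|G|) * sum_{g in G} g * f(g^{-1} x)
    # The averaged map is G-equivariant for ANY underlying f.
    return 0.5 * (f(x) - f(-x))

# An arbitrary, non-equivariant encoder-like map (weights illustrative).
rng = np.random.default_rng(3)
w, b = rng.normal(size=(2, 2)), rng.normal(size=2)
f = lambda v: np.tanh(v @ w + b)

x = np.array([0.4, -1.2])
y = frame_average(f, x)
assert np.allclose(frame_average(f, -x), -y)   # equivariance to reflection
```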
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Unsupervised Scale-Invariant Multispectral Shape Matching [7.04719493717788]
Alignment between non-rigid stretchable structures is one of the hardest tasks in computer vision.
We present an unsupervised neural network architecture based upon the spectrum of scale-invariant geometry.
Our method is agnostic to local-scale deformations and shows superior performance for matching shapes from different domains.
arXiv Detail & Related papers (2020-12-19T13:44:45Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
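The first idea can be sketched in a heavily simplified form: propagate one-hot node encodings so that each node accumulates a local-context matrix over node identities. The real method uses learned, parametrized message and update functions; here they are replaced by plain sums, and the graph is a made-up example.

```python
import numpy as np

def smp_round(context, adj):
    # One simplified round: each node adds its neighbours' context
    # vectors (indexed by node identity) to its own. Both axes of
    # `context` are indexed by nodes, so relabelling the nodes permutes
    # rows and columns consistently -- permutation equivariance.
    return context + adj @ context

# A 5-node path graph as a toy example.
n = 5
adj = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1.0

context = np.eye(n)   # propagate one-hot node encodings alongside features
for _ in range(2):
    context = smp_round(context, adj)

# After k rounds, context[i, j] > 0 iff node j is within k hops of i,
# so each node has learned a local context around itself.
```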
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- On Learning Sets of Symmetric Elements [63.12061960528641]
This paper presents a principled approach to learning sets of general symmetric elements.
We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements.
We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions.
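A DSS-style layer has the form y_i = L1(x_i) + L2(sum_j x_j), where L1 and L2 are linear layers equivariant to each element's inner symmetry. The sketch below assumes the inner symmetry is 1-D circular translation, so circular convolutions serve as the inner layers; kernels and sizes are illustrative, not the paper's parameterization.

```python
import numpy as np

def circ_conv(kernel):
    # A circular convolution: a translation-equivariant linear layer
    # on 1-D signals (one choice of "inner" equivariant layer).
    def layer(v):
        return sum(c * np.roll(v, s, axis=-1) for s, c in enumerate(kernel))
    return layer

def dss_layer(x, inner_1, inner_2):
    # DSS-style layer for a set of symmetric elements:
    #   y_i = L1(x_i) + L2(sum_j x_j)
    # Reordering elements permutes the outputs; applying the inner
    # symmetry to every element commutes with the layer.
    pooled = x.sum(axis=0)
    return np.stack([inner_1(xi) + inner_2(pooled) for xi in x])

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 6))        # a set of 4 signals of length 6
L1, L2 = circ_conv([0.5, -0.2, 0.1]), circ_conv([1.0, 0.3])
y = dss_layer(x, L1, L2)

perm = rng.permutation(4)
assert np.allclose(dss_layer(x[perm], L1, L2), y[perm])                # reordering
shifted = np.roll(x, 1, axis=1)
assert np.allclose(dss_layer(shifted, L1, L2), np.roll(y, 1, axis=1))  # shared shift
```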
arXiv Detail & Related papers (2020-02-20T07:29:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.