Similarity Equivariant Linear Transformation of Joint Orientation-Scale
Space Representations
- URL: http://arxiv.org/abs/2203.06786v2
- Date: Tue, 15 Mar 2022 04:48:54 GMT
- Title: Similarity Equivariant Linear Transformation of Joint Orientation-Scale
Space Representations
- Authors: Xinhua Zhang and Lance R. Williams
- Abstract summary: Group convolution generalizes convolution to linear operations on functions of group elements.
The group convolution that is equivariant to similarity transformation is the most general shape preserving linear operator.
We present an initial demonstration of its utility by using it to compute a shape equivariant distribution of closed contours traced by particles undergoing Brownian motion in velocity.
- Score: 11.57423546614283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolution is conventionally defined as a linear operation on functions of
one or more variables which commutes with shifts. Group convolution generalizes
the concept to linear operations on functions of group elements representing
more general geometric transformations, linear operations which commute with
those transformations. Since similarity transformation is the most general geometric
transformation on images that preserves shape, the group convolution that is
equivariant to similarity transformation is the most general shape preserving
linear operator. Because similarity transformations have four free parameters,
group convolutions are defined on four-dimensional, joint orientation-scale
spaces. Although prior work on equivariant linear operators has been limited to
discrete groups, the similarity group is continuous. In this paper, we describe
linear operators on discrete representations that are equivariant to continuous
similarity transformation. This is achieved by using a basis of functions that
is jointly shiftable-twistable-scalable. These pinwheel functions use Fourier
series in the orientation dimension and Laplace transform in the log-scale
dimension to form a basis of spatially localized functions that can be
continuously interpolated in position, orientation and scale. Although this
result is potentially significant with respect to visual computation generally,
we present an initial demonstration of its utility by using it to compute a
shape equivariant distribution of closed contours traced by particles
undergoing Brownian motion in velocity. The contours are constrained by sets of
points and line endings representing well known bistable illusory contour
inducing patterns.
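To make the shiftable-twistable-scalable property concrete, here is a minimal sketch (not the paper's construction; the function form and the parameters m, omega, and sigma are illustrative) of why separable angular/log-radial harmonics behave well under continuous similarity transformations: rotating and scaling the argument multiplies the function by a single predictable complex constant, so coefficients in such a basis can be updated in closed form for any rotation angle and scale factor. The paper's pinwheel functions go further, combining many such components through a Laplace transform in log-scale to obtain spatial localization.
```python
# Sketch (illustrative, not the paper's exact pinwheel basis):
# f(r, theta) = r**sigma * exp(1j*(m*theta + omega*log r)) is a joint
# eigenfunction of rotation and scaling, so a continuous similarity acts
# on it by a single complex factor -- the property behind continuous
# interpolation in orientation and scale.
import numpy as np

m, omega, sigma = 3, 2.0, -0.5          # illustrative harmonic parameters

def basis(x, y):
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return r**sigma * np.exp(1j * (m * theta + omega * np.log(r)))

# Sample an annular patch of the plane (the functions are singular at r = 0).
xs, ys = np.meshgrid(np.linspace(0.5, 2.0, 64), np.linspace(0.5, 2.0, 64))

# A continuous similarity T: rotate by phi, then scale by s.
phi, s = 0.37, 1.6
c, d = np.cos(phi), np.sin(phi)
# Transformed function g = f o T^{-1}, i.e. f evaluated at T^{-1}(x, y).
xi = ( c * xs + d * ys) / s
yi = (-d * xs + c * ys) / s
g = basis(xi, yi)

# Predicted eigenvalue: T multiplies f by s**(-sigma) * e^{-i(m*phi + omega*log s)}.
lam = s**(-sigma) * np.exp(-1j * (m * phi + omega * np.log(s)))
assert np.allclose(g, lam * basis(xs, ys))
```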
Related papers
- The Hyperdimensional Transform: a Holographic Representation of Functions [12.693238093510072]
We introduce the hyperdimensional transform as a new kind of integral transform.
It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors.
It provides theoretical foundations and new insights for the field of hyperdimensional computing.
arXiv Detail & Related papers (2023-10-24T11:33:39Z)
- Algebras of actions in an agent's representations of the world [51.06229789727133]
We use our framework to reproduce the symmetry-based representations from the symmetry-based disentangled representation learning formalism.
We then study the algebras of the transformations of worlds with features that occur in simple reinforcement learning scenarios.
Using computational methods that we developed, we extract the algebras of the transformations of these worlds and classify them according to their properties.
arXiv Detail & Related papers (2023-10-02T18:24:51Z)
- Fast computation of permutation equivariant layers with the partition algebra [0.0]
Linear neural network layers that are either equivariant or invariant to permutations of their inputs form core building blocks of modern deep learning architectures.
Examples include the layers of DeepSets, as well as linear layers occurring in attention blocks of transformers and some graph neural networks.
arXiv Detail & Related papers (2023-03-10T21:13:12Z)
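For readers unfamiliar with these layers, below is a minimal sketch of the classic linear permutation-equivariant map from DeepSets (Zaheer et al.); the array shapes and parameter names are illustrative, and this is a simplification rather than the partition-algebra construction of the paper above.
```python
# Sketch (illustrative): every linear map on n set elements that commutes
# with all permutations has the two-parameter form  x -> a*x + b*mean(x),
# applied elementwise across the set (the DeepSets equivariant layer).
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(), rng.normal()       # the layer's only free parameters

def equivariant_layer(x):
    # x: (n, d) array of n set elements with d features each
    return a * x + b * x.mean(axis=0, keepdims=True)

x = rng.normal(size=(5, 4))
perm = rng.permutation(5)

# Permuting the input then applying the layer equals applying the layer
# then permuting the output: the layer is permutation equivariant.
assert np.allclose(equivariant_layer(x[perm]), equivariant_layer(x)[perm])
```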
- Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z)
- A variational approach for linearly dependent moving bases in quantum dynamics: application to Gaussian functions [0.0]
We present a variational treatment of the linear dependence for a non-orthogonal time-dependent basis set in solving the Schrödinger equation.
We show that the resulting dynamics converges to the exact one and is unitary by construction.
arXiv Detail & Related papers (2022-05-04T23:41:09Z)
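The linear-dependence issue can be seen concretely in the overlap matrix of a non-orthogonal basis; below is a generic sketch (the standard generalized-eigenproblem treatment with unit-width Gaussians and a toy Hamiltonian, not the paper's variational scheme).
```python
# Sketch (illustrative): with a non-orthogonal basis, the Schrodinger
# eigenproblem becomes the generalized problem H c = E S c, where S is the
# overlap matrix; (near-)linear dependence of the basis shows up as
# (near-)zero eigenvalues of S.
import numpy as np
from scipy.linalg import eigh

# Overlaps of three unit-width 1-D Gaussians: <g_a|g_b> = exp(-(a-b)**2 / 4).
centers = np.array([0.0, 0.4, 0.8])     # closely spaced -> nearly dependent
S = np.exp(-0.25 * (centers[:, None] - centers[None, :])**2)

print(np.linalg.eigvalsh(S))            # smallest eigenvalue near 0 flags
                                        # near-linear dependence

H = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.2, 0.5],
              [0.2, 0.5, 1.5]])         # toy symmetric "Hamiltonian"
E, C = eigh(H, S)                       # generalized symmetric eigenproblem
print(E)                                # variational energies in this basis
```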
- 3D Equivariant Graph Implicit Functions [51.5559264447605]
We introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details.
Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 on the ShapeNet reconstruction task.
arXiv Detail & Related papers (2022-03-31T16:51:25Z)
- Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views? [21.06669693699965]
We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action.
We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling.
arXiv Detail & Related papers (2021-10-14T15:46:53Z)
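The dimension of the subspace fixed by a group action can be computed directly by averaging the group's representation matrices (the Reynolds projector); a small example for the cyclic shift group (illustrative, not taken from the paper above):
```python
# Sketch (illustrative): the dimension of the subspace fixed by a group
# action equals the trace of the projector P = (1/|G|) * sum_g rho(g).
# Here rho is the regular action of the cyclic group C_n by coordinate shifts.
import numpy as np

n = 6
shifts = [np.roll(np.eye(n), k, axis=0) for k in range(n)]  # rho(g), g in C_n
P = sum(shifts) / n                                          # Reynolds projector

# P is idempotent, and its trace gives the fixed-subspace dimension.
assert np.allclose(P @ P, P)
print(int(round(np.trace(P))))   # -> 1: only constant vectors are shift-invariant
```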
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Disentangling images with Lie group transformations and sparse coding [3.3454373538792552]
We train a model that learns to disentangle spatial patterns and their continuous transformations in a completely unsupervised manner.
Training the model on a dataset consisting of controlled geometric transformations of specific MNIST digits shows that it can recover these transformations along with the digits.
arXiv Detail & Related papers (2020-12-11T19:11:32Z)
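For context, continuous transformations of this kind are typically modeled as one-parameter Lie groups generated by a matrix via the exponential map; a generic sketch (illustrative, not this paper's model):
```python
# Sketch (illustrative): a one-parameter Lie group is the matrix exponential
# of a generator A; varying the scalar s moves continuously along the group,
# e.g. in-plane rotation from its infinitesimal generator.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])             # generator of 2-D rotations

def group_element(s):
    return expm(s * A)                  # exp(sA) = rotation by angle s

theta = 0.3
R = group_element(theta)
assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
# Composition along the one-parameter subgroup: exp(sA) exp(tA) = exp((s+t)A).
assert np.allclose(group_element(0.2) @ group_element(0.1), group_element(0.3))
```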
- On Path Integration of Grid Cells: Group Representation and Isotropic Scaling [135.0473739504851]
We conduct theoretical analysis of a general representation model of path integration by grid cells.
We learn hexagon grid patterns that share similar properties with the grid cells in the rodent brain.
The learned model is capable of accurate long distance path integration.
arXiv Detail & Related papers (2020-06-18T03:44:35Z)
- The Convolution Exponential and Generalized Sylvester Flows [82.18442368078804]
This paper introduces a new method to build linear flows, by taking the exponential of a linear transformation.
An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers.
We show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10.
arXiv Detail & Related papers (2020-06-02T19:43:36Z)
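The implicit computation mentioned above works because the exponential's power series only ever applies the linear map itself; here is a sketch with a small dense matrix standing in for the convolution (illustrative, not the paper's code):
```python
# Sketch (illustrative): exp(M) @ x = sum_k M^k x / k!  can be computed by
# repeatedly applying the linear map M (for example a convolution) without
# ever materializing exp(M).  The result is linear in x and always
# invertible, since exp(-M) undoes exp(M).
import numpy as np
from scipy.linalg import expm

def apply_exponential(linop, x, terms=30):
    out, term = x.copy(), x.copy()
    for k in range(1, terms):
        term = linop(term) / k          # builds M^k x / k! incrementally
        out += term
    return out

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)) * 0.3
x = rng.normal(size=8)

y = apply_exponential(lambda v: M @ v, x)
assert np.allclose(y, expm(M) @ x)                          # matches dense expm
assert np.allclose(apply_exponential(lambda v: -M @ v, y), x)  # exp(-M) inverts
```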