VolterraNet: A higher order convolutional network with group
equivariance for homogeneous manifolds
- URL: http://arxiv.org/abs/2106.15301v1
- Date: Sat, 5 Jun 2021 19:28:16 GMT
- Title: VolterraNet: A higher order convolutional network with group
equivariance for homogeneous manifolds
- Authors: Monami Banerjee, Rudrasis Chakraborty, Jose Bouza and Baba C. Vemuri
- Abstract summary: Convolutional neural networks have been highly successful in image-based learning tasks.
Recent work has generalized the traditional convolutional layer of a convolutional neural network to non-Euclidean spaces.
We present a novel higher order Volterra convolutional neural network (VolterraNet) for data defined as samples of functions on Riemannian homogeneous spaces.
- Score: 19.39397826006002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks have been highly successful in image-based
learning tasks due to their translation equivariance property. Recent work has
generalized the traditional convolutional layer of a convolutional neural
network to non-Euclidean spaces and shown group equivariance of the generalized
convolution operation. In this paper, we present a novel higher order Volterra
convolutional neural network (VolterraNet) for data defined as samples of
functions on Riemannian homogeneous spaces. Analogous to the result for
traditional convolutions, we prove that the Volterra functional convolutions
are equivariant to the action of the isometry group admitted by the Riemannian
homogeneous spaces, and under some restrictions, any non-linear equivariant
function can be expressed as our homogeneous space Volterra convolution,
generalizing the non-linear shift equivariant characterization of Volterra
expansions in Euclidean space. We also prove that second order functional
convolution operations can be represented as cascaded convolutions which leads
to an efficient implementation. Beyond this, we also propose a dilated
VolterraNet model. These advances lead to large parameter reductions relative
to baseline non-Euclidean CNNs. To demonstrate the efficacy of VolterraNet, we
present several real-data experiments involving classification tasks on the
spherical-MNIST, atomic energy, and Shrec17 data sets, as well as group testing
on diffusion MRI data. Performance comparisons to the state of the art are also
presented.
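As background for the cascaded-convolution result claimed in the abstract: in the classical Euclidean setting, a second-order Volterra expansion of a signal $x$ has the form

$$y(t) = \sum_{\tau} w_1(\tau)\, x(t-\tau) + \sum_{\tau_1,\tau_2} w_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2),$$

and if the quadratic kernel factorizes as $w_2(\tau_1,\tau_2) = \sum_r g_r(\tau_1)\, h_r(\tau_2)$, the quadratic term reduces to a sum of pointwise products of ordinary first-order convolutions. The sketch below illustrates this identity in plain NumPy for 1-D Euclidean signals only; the kernel names (`w1`, `G`, `H`) and the rank-R factorization are illustrative assumptions, not the paper's notation or its homogeneous-space construction.

```python
import numpy as np

def volterra2_cascaded(x, w1, G, H):
    """Second-order Volterra filter realized as cascaded first-order convolutions.

    Assumes the quadratic kernel factorizes as
        w2[t1, t2] = sum_r G[r, t1] * H[r, t2]   (a rank-R decomposition),
    so the quadratic term reduces to sum_r (G[r] * x) . (H[r] * x):
    pointwise products of ordinary linear convolutions.
    """
    linear = np.convolve(x, w1, mode="same")       # first-order (linear) term
    quadratic = np.zeros_like(linear)
    for g, h in zip(G, H):                         # one rank component at a time
        quadratic += np.convolve(x, g, mode="same") * np.convolve(x, h, mode="same")
    return linear + quadratic

# Toy usage: R = 3 rank components, kernel width 5, signal length 128.
rng = np.random.default_rng(0)
x = rng.standard_normal(128)
w1 = rng.standard_normal(5)
G = rng.standard_normal((3, 5))
H = rng.standard_normal((3, 5))
y = volterra2_cascaded(x, w1, G, H)   # shape (128,)
```

The cascaded form costs R + 1 linear convolutions instead of a dense quadratic kernel, consistent with the efficient implementation and parameter reductions noted in the abstract.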
Related papers
- Higher Order Gauge Equivariant CNNs on Riemannian Manifolds and
Applications [7.322121417864824]
We introduce a higher order generalization of the gauge equivariant convolution, dubbed a gauge equivariant Volterra network (GEVNet).
This allows us to model spatially extended nonlinear interactions within a given field while still maintaining equivariance to global isometries.
In the neuroimaging data experiments, the resulting two-part architecture is used to automatically discriminate between patients with Lewy Body Disease (DLB), Alzheimer's Disease (AD), and Parkinson's Disease (PD) from diffusion magnetic resonance images (dMRI).
arXiv Detail & Related papers (2023-05-26T06:02:31Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation on the learned manifold can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- The Manifold Scattering Transform for High-Dimensional Point Cloud Data [16.500568323161563]
We present practical schemes for applying the manifold scattering transform to datasets arising in naturalistic systems.
We show that our methods are effective for signal classification and manifold classification tasks.
arXiv Detail & Related papers (2022-06-21T02:15:00Z)
- Unified Fourier-based Kernel and Nonlinearity Design for Equivariant
Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z)
- ChebLieNet: Invariant Spectral Graph NNs Turned Equivariant by
Riemannian Geometry on Lie Groups [9.195729979000404]
ChebLieNet is a group-equivariant method on (anisotropic) manifolds.
We develop a graph neural network made of anisotropic convolutional layers.
We empirically demonstrate the existence of (data-dependent) sweet spots for anisotropic parameters on CIFAR10.
arXiv Detail & Related papers (2021-11-23T20:19:36Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs.
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
- Equivariant Spherical Deconvolution: Learning Sparse Orientation
Distribution Functions from Spherical Data [0.0]
We present a rotation-equivariant unsupervised learning framework for the sparse deconvolution of non-negative scalar fields defined on the unit sphere.
We show improvements in terms of tractography and partial volume estimation on a multi-shell dataset of human subjects.
arXiv Detail & Related papers (2021-02-17T16:04:35Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie
Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)