Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms
- URL: http://arxiv.org/abs/2104.11244v1
- Date: Thu, 22 Apr 2021 18:00:01 GMT
- Title: Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms
- Authors: Andrew K. Saydjari, Douglas P. Finkbeiner
- Abstract summary: Imposing symmetry on image statistics can improve human interpretability, aid in generalization, and provide dimension reduction.
We introduce a fast-to-compute, translationally invariant and rotationally equivariant wavelet scattering network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wavelet scattering networks, which are convolutional neural networks (CNNs)
with fixed filters and weights, are promising tools for image analysis.
Imposing symmetry on image statistics can improve human interpretability, aid
in generalization, and provide dimension reduction. In this work, we introduce
a fast-to-compute, translationally invariant and rotationally equivariant
wavelet scattering network (EqWS) and filter bank of wavelets (triglets). We
demonstrate the interpretability and quantify the invariance/equivariance of
the coefficients, briefly commenting on difficulties with implementing scale
equivariance. On MNIST, we show that training on a rotationally invariant
reduction of the coefficients maintains rotational invariance when generalized
to test data and visualize residual symmetry breaking terms. Rotation
equivariance is leveraged to estimate the rotation angle of digits and
reconstruct the full rotation dependence of each coefficient from a single
angle. We benchmark EqWS with linear classifiers on EMNIST and CIFAR-10/100,
introducing a new second-order, cross-color channel coupling for the color
images. We conclude by comparing the performance of an isotropic reduction of
the scattering coefficients and RWST, a previous coefficient reduction, on an
isotropic classification of magnetohydrodynamic simulations with astrophysical
relevance.
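For readers unfamiliar with scattering networks, the following is a minimal, self-contained sketch of the construction the abstract builds on: first-order scattering coefficients |x * psi_{j,l}|, spatially averaged for translation invariance, with a rotation-invariant reduction obtained by averaging over the orientation index. The Gabor-like oriented filters, function names, and parameters below are illustrative stand-ins, not the paper's EqWS implementation or its triglet filter bank.

```python
# Minimal first-order scattering sketch (illustrative only; generic
# oriented band-pass filters stand in for the paper's triglets).
import numpy as np

def oriented_filters(n, J=3, L=8):
    """Fourier-domain band-pass filters at J scales and L orientations."""
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k = np.sqrt(kx**2 + ky**2) + 1e-12
    theta = np.arctan2(ky, kx)
    psi = np.empty((J, L, n, n))
    for j in range(J):
        k0 = 0.25 / 2**j                          # center frequency at scale j
        for l in range(L):
            t0 = np.pi * l / L                    # filter orientation
            radial = np.exp(-((k - k0) ** 2) / (2 * (k0 / 2) ** 2))
            # pi-periodic angular window so real +/- directions coincide
            angular = np.cos(theta - t0) ** 2 * (np.cos(2 * (theta - t0)) > 0)
            psi[j, l] = radial * angular
    return psi

def scattering_s1(x, psi):
    """S1[j, l] = spatial mean of |x * psi_{j,l}| (translation invariant)."""
    xf = np.fft.fft2(x)
    J, L = psi.shape[:2]
    s1 = np.empty((J, L))
    for j in range(J):
        for l in range(L):
            s1[j, l] = np.abs(np.fft.ifft2(xf * psi[j, l])).mean()
    return s1

img = np.random.default_rng(0).standard_normal((64, 64))
s1 = scattering_s1(img, oriented_filters(64))
# Rotating the image (approximately) permutes the orientation index l,
# which is the equivariance property; averaging over l discards the
# rotation channel and yields a rotation-invariant reduction per scale j.
s1_iso = s1.mean(axis=1)
print(s1.shape, s1_iso.shape)  # (3, 8) (3,)
```

Second-order coefficients repeat the same filtering on the modulus fields |x * psi_{j1,l1}| before averaging; the cross-color channel coupling mentioned in the abstract extends that second-order step across color channels.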
Related papers
- A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs [5.141137421503899]
Steerable convolutional neural networks (SCNNs) enhance task performance by modelling geometric symmetries.
Yet, unknown or varying symmetries can lead to overconstrained weights and decreased performance.
This paper introduces a probabilistic method to learn the degree of equivariance in SCNNs.
arXiv Detail & Related papers (2024-06-06T10:45:19Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z)
- Weight fluctuations in (deep) linear neural networks and a derivation of the inverse-variance flatness relation [6.122833099916154]
We investigate the stationary (late-time) training regime of single- and two-layer underparameterized linear neural networks.
We identify the inter-layer coupling as a distinct source of anisotropy for the weight fluctuations.
We provide an analytical derivation of the recently observed inverse variance-flatness relation in a model of a deep linear neural network.
arXiv Detail & Related papers (2023-11-23T17:30:31Z)
- On the Computation of the Gaussian Rate-Distortion-Perception Function [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for a multivariate Gaussian source under mean squared error (MSE) distortion.
We provide the associated algorithmic realization, as well as the convergence and the rate of convergence characterization.
We corroborate our results with numerical simulations and draw connections to existing results.
arXiv Detail & Related papers (2023-11-15T18:34:03Z)
- Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non-commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks and quaternion neural networks.
arXiv Detail & Related papers (2023-10-05T20:27:22Z)
- Entropy Transformer Networks: A Learning Approach via Tangent Bundle Data Manifold [8.893886200299228]
This paper focuses on an accurate and fast approach for image transformation employed in the design of CNN architectures.
A novel Entropy STN (ESTN) is proposed that interpolates on the data manifold distributions.
Experiments on challenging benchmarks show that the proposed ESTN can improve predictive accuracy over a range of computer vision tasks.
arXiv Detail & Related papers (2023-07-24T04:21:51Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations (a minimal sketch of this idea follows the list below).
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients due to the non-differentiable discrete function during training.
arXiv Detail & Related papers (2023-02-07T10:51:53Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
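As referenced in the "Deep Neural Networks with Efficient Guaranteed Invariances" entry above, group-equivariant convolutions are a standard route to equivariant representations. The sketch below shows the classic C4 (90-degree rotation) lifting convolution built from rotated filter copies; it is a generic illustration of that technique, not the multi-stream architecture of that paper, and all names are illustrative.

```python
# Minimal C4 group-equivariant "lifting" convolution: correlate the
# input with all four 90-degree rotations of a single kernel.
import numpy as np
from scipy.signal import correlate2d

def c4_lifting_conv(image, kernel):
    """Return a (4, H, W) stack of correlations with rotated kernels.

    Rotating the input by 90 degrees rotates each feature map and
    cyclically permutes the rotation axis (up to boundary effects),
    which is the C4 equivariance property.
    """
    return np.stack([correlate2d(image, np.rot90(kernel, r), mode="same")
                     for r in range(4)])

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
kern = rng.standard_normal((3, 3))
feat = c4_lifting_conv(img, kern)        # shape (4, 16, 16)
inv = np.abs(feat).mean()                # a C4-invariant scalar summary
```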
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.