Roto-Translation Equivariant Super-Resolution of Two-Dimensional Flows Using Convolutional Neural Networks
- URL: http://arxiv.org/abs/2202.11099v1
- Date: Tue, 22 Feb 2022 07:07:07 GMT
- Title: Roto-Translation Equivariant Super-Resolution of Two-Dimensional Flows Using Convolutional Neural Networks
- Authors: Yuki Yasuda
- Abstract summary: Convolutional neural networks (CNNs) often process vectors as quantities having no direction, like the colors in an image.
This study investigates the effect of treating vectors as geometrical objects in the super-resolution of velocity fields in two-dimensional fluids.
- Score: 0.15229257192293202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) often process vectors as quantities
having no direction, like the colors in an image. This study investigates the effect
of treating vectors as geometrical objects in the super-resolution of velocity
fields in two-dimensional fluids. A vector is distinguished from a scalar by its
transformation law under a change of basis, which can be incorporated as prior
knowledge through equivariant deep learning. We convert existing CNNs into
equivariant ones by making each layer equivariant with respect to rotation and
translation. The low- and high-resolution training data are generated with
downsampling or spectral nudging. When the data inherit the rotational symmetry,
the equivariant CNNs show accuracy comparable to the non-equivariant ones. Since
the equivariant CNNs have fewer parameters, they can be trained on smaller
datasets. In this case, the transformation law of vectors should be incorporated
as prior knowledge, with vectors explicitly treated as quantities having direction.
Two examples demonstrate that the symmetry of the data can be broken. In the first
case, a downsampling method makes the correspondence between low- and
high-resolution patterns dependent on the orientation. In the second case, the
input data are insufficient to recognize the rotation of coordinates in the
experiment with spectral nudging. In both cases, the accuracy of the CNNs
deteriorates if equivariance is forcibly imposed, and the use of conventional
CNNs may be justified even though vectors are processed as quantities having no
direction.
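A minimal NumPy sketch of the transformation-law point (this is not code from the paper; the function names, the 64x64 grid, and the block-averaging factor are illustrative assumptions, and the spectral-nudging pipeline is not reproduced): a scalar field only has its sampling points moved by a rotation of the basis, whereas a velocity field also has its components mixed by the rotation matrix. The example also checks that block-average downsampling commutes with 90-degree rotations, which is the kind of symmetry an equivariant CNN respects by construction.

```python
import numpy as np

def rotate_scalar_90(s):
    """Scalar field: only the sampling points move, s'(x) = s(R^{-1} x)."""
    return np.rot90(s)

def rotate_vector_90(u, v):
    """Vector field: the sampling points move AND the components mix,
    (u', v')(x) = R (u, v)(R^{-1} x), here for a 90-degree rotation R."""
    u_r, v_r = np.rot90(u), np.rot90(v)  # move the sampling points
    return -v_r, u_r                     # rotate the components: (u, v) -> (-v, u)

def downsample(field, factor=4):
    """Low-resolution data by block averaging (one simple way to coarse-grain;
    the spectral nudging mentioned in the abstract is not sketched here)."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy high-resolution velocity components on a 64x64 grid.
rng = np.random.default_rng(0)
u_hi, v_hi = rng.standard_normal((2, 64, 64))

# Block averaging commutes with 90-degree rotations, so the low-/high-resolution
# correspondence does not depend on orientation.
u_a, v_a = rotate_vector_90(downsample(u_hi), downsample(v_hi))
u_b, v_b = (downsample(f) for f in rotate_vector_90(u_hi, v_hi))
print(np.allclose(u_a, u_b), np.allclose(v_a, v_b))  # True True
```

This commutation is exactly what breaks in the paper's first symmetry-breaking example, where the downsampling method makes the correspondence between low- and high-resolution patterns depend on the orientation.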
Related papers
- Revisiting Data Augmentation for Rotational Invariance in Convolutional Neural Networks [0.29127054707887967]
We investigate how best to include rotational invariance in a CNN for image classification.
Our experiments show that networks trained with data augmentation alone can classify rotated images nearly as well as in the normal unrotated case.
arXiv Detail & Related papers (2023-10-12T15:53:24Z)
- The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures.
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
For example, transformers can be more equivariant than convolutional neural networks after training; a simple finite-rotation check in this spirit is sketched after this list.
arXiv Detail & Related papers (2022-10-06T15:20:55Z)
- Learning Invariant Representations for Equivariant Neural Networks Using Orthogonal Moments [9.680414207552722]
The convolutional layers of standard convolutional neural networks (CNNs) are equivariant to translation.
Recently, a new class of CNNs has been proposed in which the conventional layers of CNNs are replaced with equivariant convolution, pooling, and batch-normalization layers.
arXiv Detail & Related papers (2022-09-22T11:48:39Z)
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the proposed TinvNN can strictly guarantee transformation invariance and is general and flexible enough to be combined with existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Quantised Transforming Auto-Encoders: Achieving Equivariance to Arbitrary Transformations in Deep Networks [23.673155102696338]
Convolutional Neural Networks (CNNs) are equivariant to image translation.
We propose an auto-encoder architecture whose embedding obeys an arbitrary set of equivariance relations simultaneously.
We demonstrate successful re-rendering of transformed versions of input images on several datasets.
arXiv Detail & Related papers (2021-11-25T02:26:38Z)
- Nonlinearities in Steerable SO(2)-Equivariant CNNs [7.552100672006172]
We apply harmonic distortion analysis to illuminate the effect of nonlinearities on representations of SO(2).
We develop a novel FFT-based algorithm for computing representations of non-linearly transformed activations.
In experiments with 2D and 3D data, we obtain results that compare favorably to the state-of-the-art in terms of accuracy while maintaining continuous symmetry and exact equivariance.
arXiv Detail & Related papers (2021-09-14T17:53:45Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs.
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
- PDO-e$\text{S}^\text{2}$CNNs: Partial Differential Operator Based Equivariant Spherical CNNs [77.53203546732664]
We use partial differential operators to design a spherical equivariant CNN, PDO-e$\text{S}^\text{2}$CNN, which is exactly rotation equivariant in the continuous domain.
In experiments, PDO-e$\text{S}^\text{2}$CNNs show greater parameter efficiency and outperform other spherical CNNs significantly on several tasks.
arXiv Detail & Related papers (2021-04-08T07:54:50Z)
- Learning Equivariant Representations [10.745691354609738]
Convolutional neural networks (CNNs) are successful examples of exploiting known structure in the data.
We propose equivariant models for different transformations defined by groups of symmetries.
These models leverage symmetries in the data to reduce sample and model complexity and improve generalization performance.
arXiv Detail & Related papers (2020-12-04T18:46:17Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
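As a rough companion to the Lie-derivative entry above: a trained model's rotational equivariance can be probed with finite 90-degree rotations, for which exact grid rotations exist, by comparing model(rotate(x)) with rotate(model(x)). The sketch below is an illustrative assumption only; the toy scalar-to-scalar CNN, tensor shapes, and relative-error measure are my choices, not the Lie-derivative metric of that paper.

```python
import torch
import torch.nn as nn

def rotation_equivariance_error(model, x):
    """Relative discrepancy between rotating the input before applying the
    model and rotating the output afterwards, for a 90-degree rotation."""
    with torch.no_grad():
        y_rotate_first = model(torch.rot90(x, k=1, dims=(2, 3)))
        y_rotate_last = torch.rot90(model(x), k=1, dims=(2, 3))
    return (y_rotate_first - y_rotate_last).norm() / y_rotate_last.norm()

# Toy scalar-to-scalar CNN; for vector-valued outputs the channels would also
# have to be mixed by the rotation matrix, as discussed in the abstract above.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
x = torch.randn(8, 1, 32, 32)
print(float(rotation_equivariance_error(model, x)))  # 0.0 only for an exactly equivariant model
```

An ordinary CNN like this one typically gives a clearly nonzero error, consistent with the observation above that standard convolutional layers are equivariant to translation but not to rotation.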