Scale Equivariant U-Net
- URL: http://arxiv.org/abs/2210.04508v1
- Date: Mon, 10 Oct 2022 09:19:40 GMT
- Title: Scale Equivariant U-Net
- Authors: Mateus Sangalli (CMM), Samy Blusseau (CMM), Santiago Velasco-Forero
(CMM), Jesus Angulo (CMM)
- Abstract summary: This paper introduces the Scale Equivariant U-Net (SEU-Net), a U-Net that is made approximately equivariant to a semigroup of scales and translations.
The proposed SEU-Net is trained for semantic segmentation on the Oxford-IIIT Pet dataset and for cell segmentation on the DIC-C2DH-HeLa dataset.
Generalization to unseen scales is dramatically improved in comparison to the U-Net, even when the U-Net is trained with scale jittering.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In neural networks, the property of being equivariant to transformations
improves generalization when the corresponding symmetry is present in the data.
In particular, scale-equivariant networks are suited to computer vision tasks
where the same classes of objects appear at different scales, like in most
semantic segmentation tasks. Recently, convolutional layers equivariant to a
semigroup of scalings and translations have been proposed. However, the
equivariance of subsampling and upsampling has never been explicitly studied
even though they are necessary building blocks in some segmentation
architectures. The U-Net is a representative example of such architectures,
which includes the basic elements used for state-of-the-art semantic
segmentation. Therefore, this paper introduces the Scale Equivariant U-Net
(SEU-Net), a U-Net that is made approximately equivariant to a semigroup of
scales and translations through careful application of subsampling and
upsampling layers and the use of the aforementioned scale-equivariant layers.
Moreover, a scale-dropout is proposed in order to improve generalization to
different scales in approximately scale-equivariant architectures. The proposed
SEU-Net is trained for semantic segmentation on the Oxford-IIIT Pet dataset and
for cell segmentation on the DIC-C2DH-HeLa dataset. Generalization to unseen
scales is dramatically improved in comparison to the U-Net, even when the U-Net
is trained with scale jittering, and in comparison to a scale-equivariant
architecture that does not apply upsampling operators inside the equivariant
pipeline. The scale-dropout improves generalization of the scale-equivariant
models in the Pet experiment, but not in the cell segmentation experiment.
Related papers
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Moving Frame Net: SE(3)-Equivariant Network for Volumes [0.0]
A rotation and translation equivariant neural network for image data was proposed based on the moving frames approach.
We significantly improve that approach by reducing the computation of moving frames to only one, at the input stage.
Our trained model outperforms the benchmarks in medical volume classification on most of the tested datasets from MedMNIST3D.
arXiv Detail & Related papers (2022-11-07T10:25:38Z)
- SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z)
- Equivariance versus Augmentation for Spherical Images [0.7388859384645262]
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images.
We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation.
arXiv Detail & Related papers (2022-02-08T16:49:30Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to enable application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Robust Training of Neural Networks using Scale Invariant Architectures [70.67803417918854]
In contrast to SGD, adaptive gradient methods like Adam allow robust training of modern deep networks.
We show that this general approach is robust to rescaling of parameters and loss.
We design a scale invariant version of BERT, called SIBERT, which when trained simply by vanilla SGD achieves performance comparable to BERT trained by adaptive methods like Adam.
arXiv Detail & Related papers (2022-02-02T11:58:56Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation-equivariant subsampling/upsampling layers that can be used to construct exactly translation-equivariant CNNs (a minimal sketch of this idea appears after this list).
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
- Scale-covariant and scale-invariant Gaussian derivative networks [0.0]
This paper presents a hybrid approach between scale-space theory and deep learning, where a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade.
It is demonstrated that the resulting approach allows for scale generalization, enabling good performance for classifying patterns at scales not present in the training data (a brief sketch of such a layer appears after this list).
arXiv Detail & Related papers (2020-11-30T13:15:10Z)
- Scale Equivariance Improves Siamese Tracking [1.7188280334580197]
Siamese trackers turn tracking into similarity estimation between a template and the candidate regions in the frame.
Non-translation-equivariant architectures induce a positional bias during training.
We present SE-SiamFC, a scale-equivariant variant of SiamFC built according to the proposed recipe.
arXiv Detail & Related papers (2020-07-17T16:55:51Z)
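The Group Equivariant Subsampling entry above describes subsampling layers that keep exact translation equivariance, which is closely related to the subsampling question the SEU-Net abstract raises. Below is a minimal, hedged 1-D sketch of the underlying idea, not that paper's construction: choose the sampling phase from the signal content itself, so that shifting the input shifts the selected samples consistently instead of aliasing them onto a fixed grid. The function name and the argmax-based phase rule are illustrative assumptions.

```python
# Hedged 1-D illustration of content-adaptive, translation-equivariant
# subsampling.  The argmax-based phase rule is an assumption made for this
# sketch, not the cited paper's construction.
import numpy as np

def equivariant_subsample(x, stride=2):
    """Subsample a 1D signal by `stride`, choosing the sampling phase from the
    signal itself (here, the phase that contains the global argmax).

    Because the phase moves with the content, shifting x shifts the selected
    samples consistently instead of aliasing them onto a fixed grid.
    """
    phase = int(np.argmax(np.abs(x)) % stride)
    return x[phase::stride], phase

if __name__ == "__main__":
    x = np.array([0., 1., 0., 3., 0., 1., 0., 0.])
    y0, p0 = equivariant_subsample(x)
    y1, p1 = equivariant_subsample(np.roll(x, 1))  # shift the input by one
    print(y0, p0)   # [1. 3. 1. 0.] 1
    print(y1, p1)   # [0. 1. 3. 1.] 0  (same samples, shifted with the signal)
```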
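The Gaussian derivative networks entry above describes coupling parameterized scale-space operations in cascade. Below is a hedged sketch of a single such layer, assuming, as an illustration rather than that paper's code, that it is a learned linear combination of scale-normalized Gaussian derivatives up to second order at one scale.

```python
# Hedged sketch of one scale-space layer in the spirit of Gaussian derivative
# networks: a learned linear combination of scale-normalized Gaussian
# derivatives.  The derivative set and normalization are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

# Derivative orders (d/dy, d/dx) up to second order.
ORDERS = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def gaussian_derivative_layer(image, sigma, weights):
    """Apply scale-normalized Gaussian derivatives at scale `sigma` and
    combine them with learned `weights` (one weight per derivative order)."""
    assert len(weights) == len(ORDERS)
    responses = [
        gaussian_filter(image, sigma=sigma, order=o) * sigma ** sum(o)
        for o in ORDERS  # sigma**|order| gives scale-normalized derivatives
    ]
    return sum(w * r for w, r in zip(weights, responses))

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    w = np.random.randn(len(ORDERS)) * 0.1     # stand-in for learned weights
    # A cascade couples such layers at a few scales, for example:
    out = img
    for sigma in (1.0, 2.0, 4.0):
        out = np.maximum(gaussian_derivative_layer(out, sigma, w), 0.0)  # ReLU
    print(out.shape)
```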