DISCO: accurate Discrete Scale Convolutions
- URL: http://arxiv.org/abs/2106.02733v1
- Date: Fri, 4 Jun 2021 21:48:09 GMT
- Title: DISCO: accurate Discrete Scale Convolutions
- Authors: Ivan Sosnovik, Artem Moskalev, Arnold Smeulders
- Abstract summary: Scale is often seen as a given, disturbing factor in many vision tasks. When treated as such, it is one of the reasons why we need more data during learning.
We aim for accurate scale-equivariant convolutional neural networks (SE-CNNs) applicable for problems where high granularity of scale and small filter sizes are required.
- Score: 2.1485350418225244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scale is often seen as a given, disturbing factor in many vision tasks. When
treated as such, it is one of the reasons why we need more data during learning. In
recent work scale equivariance was added to convolutional neural networks. It
was shown to be effective for a range of tasks. We aim for accurate
scale-equivariant convolutional neural networks (SE-CNNs) applicable for
problems where high granularity of scale and small filter sizes are required.
Current SE-CNNs rely on weight sharing and filter rescaling, the latter of
which is accurate for integer scales only. To reach accurate scale
equivariance, we derive general constraints under which scale-convolution
remains equivariant to discrete rescaling. We find the exact solution for all
cases where it exists, and compute the approximation for the rest. The discrete
scale-convolution pays off, as demonstrated in a new state-of-the-art
classification on MNIST-scale and improving the results on STL-10. With the
same SE scheme, we also improve the computational effort of a scale-equivariant
Siamese tracker on OTB-13.
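
To make the weight-sharing and filter-rescaling pipeline described in the abstract concrete, here is a minimal PyTorch sketch, assuming a small hand-picked scale set, exact dilation for integer scales, and bilinear interpolation as the non-integer approximation. The names `rescale_filter` and `scale_conv` are hypothetical; this is not the authors' released implementation or their constraint-based solution, only the generic scale-convolution mechanics.

```python
# Minimal sketch of a discrete scale-convolution (hypothetical names, not the
# authors' code). One base filter is shared across scales: integer scales are
# realized exactly by dilation, non-integer scales by an interpolated resize.
import torch
import torch.nn.functional as F

def rescale_filter(base, scale):
    """Return a rescaled copy of `base` (C_out, C_in, k, k) for one scale."""
    if float(scale).is_integer():
        # Integer scales can be handled exactly via dilated convolution,
        # so the filter itself is left untouched (dilation applied later).
        return base, int(scale)
    k = base.shape[-1]
    new_k = max(3, int(round(k * scale)) | 1)          # keep an odd kernel size
    resized = F.interpolate(base, size=(new_k, new_k),
                            mode="bilinear", align_corners=True)
    return resized, 1

def scale_conv(x, base, scales=(1.0, 1.41, 2.0)):
    """Apply the shared base filter at every scale; stack along a scale axis."""
    outs = []
    for s in scales:
        w, dilation = rescale_filter(base, s)
        pad = dilation * (w.shape[-1] - 1) // 2        # 'same' padding
        outs.append(F.conv2d(x, w, padding=pad, dilation=dilation))
    return torch.stack(outs, dim=1)                    # (B, n_scales, C_out, H, W)

x = torch.randn(2, 3, 32, 32)
base = torch.randn(8, 3, 5, 5)                         # one shared base filter
print(scale_conv(x, base).shape)                       # torch.Size([2, 3, 8, 32, 32])
```

The bilinear resize used here for non-integer scales is exactly the kind of approximation whose error the paper aims to remove by solving for the filters under explicit equivariance constraints.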
Related papers
- Improved Generalization of Weight Space Networks via Augmentations [56.571475005291035]
Learning in deep weight spaces (DWS) is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs).
We empirically analyze the reasons for this overfitting and find that a key reason is the lack of diversity in DWS datasets.
To address this, we explore strategies for data augmentation in weight spaces and propose a MixUp method adapted for weight spaces.
arXiv Detail & Related papers (2024-02-06T15:34:44Z)
- Truly Scale-Equivariant Deep Nets with Fourier Layers [14.072558848402362]
In computer vision, models must be able to adapt to changes in image resolution to effectively carry out tasks such as image segmentation.
Recent works have made progress in developing scale-equivariant convolutional neural networks, through weight-sharing and kernel resizing.
We propose a novel architecture based on Fourier layers to achieve truly scale-equivariant deep nets.
arXiv Detail & Related papers (2023-11-06T07:32:27Z)
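
As a hedged illustration of the general idea behind Fourier-based rescaling referenced in the entry above, the sketch below resizes a signal by cropping its centred spectrum, which sidesteps spatial interpolation error. The function `fourier_downscale` is hypothetical and does not reproduce the paper's layer definition.

```python
# Hedged sketch: resizing a signal in the Fourier domain by cropping its
# centred spectrum. Illustrates the principle only, not the paper's layers.
import torch

def fourier_downscale(x, out_size):
    """Downscale a (B, C, H, W) tensor to (B, C, out_size, out_size)."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    H, W = x.shape[-2:]
    top = (H - out_size) // 2
    left = (W - out_size) // 2
    cropped = spec[..., top:top + out_size, left:left + out_size]
    cropped = torch.fft.ifftshift(cropped, dim=(-2, -1))
    y = torch.fft.ifft2(cropped).real
    return y * (out_size * out_size) / (H * W)         # keep mean intensity

x = torch.randn(1, 3, 32, 32)
print(fourier_downscale(x, 16).shape)                  # torch.Size([1, 3, 16, 16])
```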
- Riesz networks: scale invariant neural networks in a single forward pass [0.7673339435080445]
We introduce the Riesz network, a novel scale invariant neural network.
As an application example, we consider detecting and segmenting cracks in tomographic images of concrete.
We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths.
arXiv Detail & Related papers (2023-05-08T12:39:49Z)
- Scale-Equivariant Deep Learning for 3D Data [44.52688267348063]
Convolutional neural networks (CNNs) recognize objects regardless of their position in the image.
We propose a scale-equivariant convolutional network layer for three-dimensional data.
Our experiments demonstrate the effectiveness of the proposed method in achieving scale equivariance for 3D medical image analysis.
arXiv Detail & Related papers (2023-04-12T13:56:12Z)
- Rotation-Scale Equivariant Steerable Filters [1.213915839836187]
Digital histology imaging of biopsy tissue can be captured at arbitrary orientation and magnification and stored at different resolutions.
We propose the Rotation-Scale Equivariant Steerable Filter (RSESF), which incorporates steerable filters and scale-space theory.
Our method outperforms other approaches, with much fewer trainable parameters and fewer GPU resources required.
arXiv Detail & Related papers (2023-04-10T14:13:56Z)
- Just a Matter of Scale? Reevaluating Scale Equivariance in Convolutional Neural Networks [3.124871781422893]
Convolutional networks are not equivariant to variations in scale and fail to generalize to objects of different sizes.
We introduce a new family of models that applies many re-scaled kernels with shared weights in parallel and then selects the most appropriate one.
Our experimental results on STIR show that both the existing and proposed approaches can improve generalization across scales compared to standard convolutions.
arXiv Detail & Related papers (2022-11-18T15:27:05Z)
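
A minimal sketch of the selection step described in the entry above: given responses from several re-scaled kernels sharing one set of weights, keep the response of the scale that activates most strongly at each location. The tensor layout and the max-magnitude criterion are assumptions made here for illustration.

```python
# Hedged sketch of max-over-scales selection for a shared-weight kernel bank.
import torch

def select_scale(responses):
    """responses: (B, n_scales, C, H, W) -> (B, C, H, W)."""
    strength = responses.abs().sum(dim=2, keepdim=True)    # per-scale evidence
    best = strength.argmax(dim=1, keepdim=True)            # (B, 1, 1, H, W)
    best = best.expand(-1, -1, responses.shape[2], -1, -1)
    return responses.gather(1, best).squeeze(1)

resp = torch.randn(2, 3, 8, 32, 32)      # e.g. the output of scale_conv above
print(select_scale(resp).shape)          # torch.Size([2, 8, 32, 32])
```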
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
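
For readers unfamiliar with implicit networks, the sketch below defines a layer output as the fixed point of z = tanh(Wz + Ux + b) and finds it by iteration; if the map is not contractive, the iteration need not converge, which is the instability the paper addresses. The simple spectral rescaling shown here is an illustrative stand-in, not the paper's non-Euclidean framework.

```python
# Hedged sketch of an implicit layer: z is defined implicitly as a fixed point
# and found by iteration; ||W||_2 < 1 keeps the map a contraction since tanh
# is 1-Lipschitz.
import torch

def implicit_layer(x, W, U, b, n_iter=50):
    z = torch.zeros(x.shape[0], W.shape[0])
    for _ in range(n_iter):
        z = torch.tanh(z @ W.T + x @ U.T + b)
    return z

torch.manual_seed(0)
W = torch.randn(16, 16)
W = 0.9 * W / torch.linalg.matrix_norm(W, ord=2)   # enforce a contraction
U, b = torch.randn(16, 8), torch.zeros(16)
x = torch.randn(4, 8)
print(implicit_layer(x, W, U, b).shape)            # torch.Size([4, 16])
```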
- BN-invariant sharpness regularizes the training model to better generalization [72.97766238317081]
We propose a measure of sharpness, BN-Sharpness, which gives consistent value for equivalent networks under BN.
We use the BN-sharpness to regularize the training and design an algorithm to minimize the new regularized objective.
arXiv Detail & Related papers (2021-01-08T10:23:24Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights [53.8489656709356]
Normalization techniques are a boon for modern deep learning.
It is often overlooked, however, that the additional introduction of momentum results in a rapid reduction in effective step sizes for scale-invariant weights.
In this paper, we verify that the widely-adopted combination of the two ingredients leads to premature decay of effective step sizes and sub-optimal model performance.
arXiv Detail & Related papers (2020-06-15T08:35:15Z)
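
The effect described in the entry above can be illustrated numerically: for a weight that matters only through its direction (as it does after normalization), gradients are orthogonal to the weight, so momentum keeps growing its norm and the effective step size shrinks. The toy loss and hyper-parameters below are assumptions for illustration, not the paper's experiments.

```python
# Hedged numeric illustration: a loss that depends on w only via w/||w|| has
# gradients orthogonal to w, so SGD with momentum keeps inflating ||w|| and
# the effective step size ||delta w|| / ||w|| keeps shrinking.
import torch

target = torch.tensor([0.0, 1.0])
w = torch.tensor([1.0, 0.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1, momentum=0.9)

for step in range(1, 201):
    opt.zero_grad()
    loss = -(w / w.norm() * target).sum()      # scale-invariant toy loss
    loss.backward()
    prev = w.detach().clone()
    opt.step()
    if step % 50 == 0:
        eff = (w.detach() - prev).norm() / prev.norm()
        print(f"step {step:3d}  ||w|| = {w.norm().item():.2f}  "
              f"effective step = {eff.item():.4f}")
```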
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.