Riesz feature representation: scale equivariant scattering network for classification tasks
- URL: http://arxiv.org/abs/2307.08467v2
- Date: Thu, 11 Jan 2024 13:38:29 GMT
- Title: Riesz feature representation: scale equivariant scattering network for classification tasks
- Authors: Tin Barisin and Jesus Angulo and Katja Schladitz and Claudia Redenbach
- Abstract summary: Scattering networks yield powerful hierarchical image descriptors which do not require lengthy training.
However, they rely on sampling the scale dimension, which makes them sensitive to scale variations.
In this work, we define an alternative feature representation based on the Riesz transform.
- Score: 0.6827423171182154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scattering networks yield powerful and robust hierarchical image descriptors
which do not require lengthy training and which work well with very little
training data. However, they rely on sampling the scale dimension. Hence, they
become sensitive to scale variations and are unable to generalize to unseen
scales. In this work, we define an alternative feature representation based on
the Riesz transform. We detail and analyze the mathematical foundations behind
this representation. In particular, it inherits scale equivariance from the
Riesz transform and completely avoids sampling of the scale dimension.
Additionally, the number of features in the representation is reduced by a
factor of four compared to scattering networks. Nevertheless, our representation
performs comparably well for texture classification with an interesting
addition: scale equivariance. Our method yields superior performance when
dealing with scales outside of those covered by the training dataset. The
usefulness of the equivariance property is demonstrated on the digit
classification task, where accuracy remains stable even for scales four times
larger than the one chosen for training. As a second example, we consider
classification of textures.
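As a rough illustration of the basic building block behind this representation (not the authors' code, and not their full higher-order feature pipeline), the first-order Riesz transform of an image can be computed in the Fourier domain, where its two components have transfer functions -i xi_1/|xi| and -i xi_2/|xi|. The NumPy sketch below shows that computation; all function and variable names are my own. The property the paper exploits is that this operator commutes with dilations, which is why no sampling of the scale axis is needed.

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2D image via the FFT.

    Uses the frequency-domain definition: the j-th component has
    transfer function -1j * xi_j / |xi|. Returns the pair (R1 f, R2 f).
    """
    h, w = img.shape
    xi1 = np.fft.fftfreq(h)[:, None]   # vertical frequencies (cycles/pixel)
    xi2 = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
    norm = np.sqrt(xi1**2 + xi2**2)
    norm[0, 0] = 1.0                   # avoid division by zero at the DC term

    F = np.fft.fft2(img)
    R1 = np.real(np.fft.ifft2(-1j * xi1 / norm * F))
    R2 = np.real(np.fft.ifft2(-1j * xi2 / norm * F))
    return R1, R2

# Toy usage on a smooth blob; the two components behave like partial
# derivatives normalised by the gradient magnitude.
y, x = np.mgrid[0:128, 0:128]
blob = np.exp(-((x - 64)**2 + (y - 64)**2) / (2 * 10.0**2))
R1, R2 = riesz_transform(blob)
print(R1.shape, R2.shape)
```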
Related papers
- Balancing Logit Variation for Long-tailed Semantic Segmentation [28.92929059563813]
We introduce category-wise variation into the network predictions in the training phase.
We close the gap between the feature areas of different categories, resulting in a more balanced representation.
Our method generalizes well across various datasets and task settings.
arXiv Detail & Related papers (2023-06-03T09:19:24Z)
- Riesz networks: scale invariant neural networks in a single forward pass [0.7673339435080445]
We introduce the Riesz network, a novel scale invariant neural network.
As an application example, we consider detecting and segmenting cracks in tomographic images of concrete.
We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths.
arXiv Detail & Related papers (2023-05-08T12:39:49Z)
- Self-similarity Driven Scale-invariant Learning for Weakly Supervised Person Search [66.95134080902717]
We propose a novel one-step framework, named Self-similarity driven Scale-invariant Learning (SSL)
We introduce a Multi-scale Exemplar Branch to guide the network in concentrating on the foreground and learning scale-invariant features.
Experiments on PRW and CUHK-SYSU databases demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-02-25T04:48:11Z)
- Just a Matter of Scale? Reevaluating Scale Equivariance in Convolutional Neural Networks [3.124871781422893]
Convolutional networks are not equivariant to variations in scale and fail to generalize to objects of different sizes.
We introduce a new family of models that applies many re-scaled kernels with shared weights in parallel and then selects the most appropriate one (a toy sketch of this idea follows this entry).
Our experimental results on STIR show that both the existing and proposed approaches can improve generalization across scales compared to standard convolutions.
arXiv Detail & Related papers (2022-11-18T15:27:05Z)
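The entry above describes applying many re-scaled copies of a shared kernel in parallel and then selecting the most appropriate scale. The PyTorch sketch below is a hypothetical toy version of that idea under my own assumptions (bilinear kernel rescaling, max-over-scales selection); it is not the paper's architecture, and names such as ScaleSelectConv2d are invented for illustration.

```python
import torch
import torch.nn.functional as F
from torch import nn

class ScaleSelectConv2d(nn.Module):
    """Toy multi-scale convolution with weight sharing.

    A single base kernel is rescaled to several sizes, every rescaled copy
    is applied to the input, and for each position the strongest response
    across scales is kept (max-over-scales selection).
    """

    def __init__(self, in_ch, out_ch, base_size=3, scales=(1.0, 1.5, 2.0)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, base_size, base_size) * 0.1)
        self.scales = scales

    def forward(self, x):
        responses = []
        for s in self.scales:
            k = max(3, int(round(self.weight.shape[-1] * s)) | 1)   # odd kernel size
            w = F.interpolate(self.weight, size=(k, k), mode="bilinear",
                              align_corners=False) / (s * s)        # roughly preserve kernel mass
            responses.append(F.conv2d(x, w, padding=k // 2))
        return torch.amax(torch.stack(responses, dim=0), dim=0)

x = torch.randn(1, 3, 64, 64)
layer = ScaleSelectConv2d(3, 8)
print(layer(x).shape)  # torch.Size([1, 8, 64, 64])
```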
- Scale Equivariant U-Net [0.0]
This paper introduces the Scale Equivariant U-Net (SEU-Net), a U-Net that is made approximately equivariant to a semigroup of scales and translations.
The proposed SEU-Net is trained for semantic segmentation of the Oxford Pet IIIT and the DIC-C2DH-HeLa dataset for cell segmentation.
The generalization metric to unseen scales is dramatically improved in comparison to the U-Net, even when the U-Net is trained with scale jittering.
arXiv Detail & Related papers (2022-10-10T09:19:40Z)
- The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures (a crude finite-difference version of such an equivariance check is sketched after this entry).
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
For example, transformers can be more equivariant than convolutional neural networks after training.
arXiv Detail & Related papers (2022-10-06T15:20:55Z)
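The Lie derivative named in the title of the entry above measures how much a model's output changes along a continuous transformation of its input. The sketch below is only a crude finite-difference stand-in for that idea, restricted to invariance of classifier outputs under continuous zooming; it omits the paper's handling of aliasing and layer-wise equivariance, and the model and all names are hypothetical.

```python
import math
import torch
import torch.nn.functional as F

def zoom(img, s):
    """Continuously zoom a batch of images by factor exp(s) about the centre."""
    n = img.shape[0]
    theta = torch.zeros(n, 2, 3)
    theta[:, 0, 0] = math.exp(-s)
    theta[:, 1, 1] = math.exp(-s)
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def invariance_violation(model, img, eps=1e-2):
    """Finite-difference proxy for the Lie derivative of the outputs under
    scaling: ||(model(zoom(x, eps)) - model(x)) / eps||. A perfectly
    scale-invariant classifier would give 0."""
    with torch.no_grad():
        d = (model(zoom(img, eps)) - model(img)) / eps
    return d.norm(dim=-1).mean().item()

# Toy usage with a hypothetical small CNN classifier.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10),
)
x = torch.randn(4, 1, 32, 32)
print(invariance_violation(model, x))
```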
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Do Deep Networks Transfer Invariances Across Classes? [123.84237389985236]
We show how a generative approach for learning the nuisance transformations can help transfer invariances across classes.
Our results provide one explanation for why classifiers generalize poorly on unbalanced and long-tailed distributions.
arXiv Detail & Related papers (2022-03-18T04:38:18Z)
- Feature Generation for Long-tail Classification [36.186909933006675]
We show how to generate meaningful features by estimating the tail category's distribution.
We also present a qualitative analysis of generated features using t-SNE visualizations and analyze the nearest neighbors used to calibrate the tail class distributions.
arXiv Detail & Related papers (2021-11-10T21:34:29Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters (see the sketch after this entry).
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
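The entry above describes learning a distribution over augmentations jointly with the network weights. Below is a minimal PyTorch sketch of that idea for a single rotation-range parameter, using a differentiable warp so that gradients reach the range parameter; the regulariser, the tiny classifier, and all names are my own simplifications, not the paper's implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class LearnedRotationAug(nn.Module):
    """Learn the width of a rotation-augmentation distribution jointly with
    the network. Angles are sampled uniformly from [-theta, theta] with
    theta trainable; the rotation is applied by a differentiable grid
    sample so gradients flow into theta."""

    def __init__(self):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(0.1))  # rotation half-range (radians)

    def forward(self, x):
        n = x.shape[0]
        # Reparameterised uniform sample in [-theta, theta]
        angle = self.theta * (2 * torch.rand(n, device=x.device) - 1)
        cos, sin = torch.cos(angle), torch.sin(angle)
        mat = torch.zeros(n, 2, 3, device=x.device)
        mat[:, 0, 0], mat[:, 0, 1] = cos, -sin
        mat[:, 1, 0], mat[:, 1, 1] = sin, cos
        grid = F.affine_grid(mat, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Joint training step: loss gradients flow into both the classifier and theta.
aug = LearnedRotationAug()
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
opt = torch.optim.Adam(list(net.parameters()) + list(aug.parameters()), lr=1e-3)
x, y = torch.randn(16, 1, 32, 32), torch.randint(0, 10, (16,))
loss = F.cross_entropy(net(aug(x)), y) - 0.01 * aug.theta  # small bonus for wider invariance
loss.backward()
opt.step()
print(float(aug.theta))
```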
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.