Group Downsampling with Equivariant Anti-aliasing
- URL: http://arxiv.org/abs/2504.17258v1
- Date: Thu, 24 Apr 2025 05:29:51 GMT
- Title: Group Downsampling with Equivariant Anti-aliasing
- Authors: Md Ashiqur Rahman, Raymond A. Yeh
- Abstract summary: We study the generalization of the uniform downsampling layer for group equivariant architectures. We aim to downsample signals (feature maps) on general finite groups with anti-aliasing. Notably, our method generalizes the notion of downsampling based on classical sampling theory.
- Score: 11.912092139018885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Downsampling layers are crucial building blocks in CNN architectures, which help to increase the receptive field for learning high-level features and reduce the amount of memory/computation in the model. In this work, we study the generalization of the uniform downsampling layer for group equivariant architectures, e.g., G-CNNs. That is, we aim to downsample signals (feature maps) on general finite groups with anti-aliasing. This involves the following: (a) Given a finite group and a downsampling rate, we present an algorithm to form a suitable choice of subgroup. (b) Given a group and a subgroup, we study the notion of bandlimited-ness and propose how to perform anti-aliasing. Notably, our method generalizes the notion of downsampling based on classical sampling theory. When the signal is on a cyclic group, i.e., periodic, our method recovers the standard downsampling of an ideal low-pass filter followed by a subsampling operation. Finally, we conduct experiments on image classification tasks demonstrating that the proposed downsampling operation improves accuracy, better preserves equivariance, and reduces model size when incorporated into G-equivariant networks.
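The cyclic-group special case at the end of the abstract can be made concrete. Below is a minimal NumPy sketch (the function name and details are illustrative assumptions, not the authors' code) of downsampling a signal on Z_N to the subgroup Z_{N/rate}: an ideal low-pass filter applied in the Fourier domain, followed by subsampling onto the subgroup.

```python
import numpy as np

def downsample_cyclic(x, rate):
    """Anti-aliased downsampling on the cyclic group Z_N (sketch).

    Classical special case: zero out all but the M = N // rate lowest
    DFT frequencies (ideal low-pass), then keep every `rate`-th sample
    (subsampling onto the subgroup Z_M <= Z_N).
    """
    N = len(x)
    assert N % rate == 0, "rate must divide the signal length"
    M = N // rate
    X = np.fft.fftshift(np.fft.fft(x))      # centre the spectrum
    mask = np.zeros(N, dtype=bool)
    start = (N - M) // 2
    mask[start:start + M] = True            # keep the M centre bins
    X[~mask] = 0.0                          # ideal low-pass
    x_lp = np.fft.ifft(np.fft.ifftshift(X))
    return np.real_if_close(x_lp[::rate])   # subsample

# A signal below the new Nyquist limit passes through unchanged:
x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
y = downsample_cyclic(x, 2)                 # 8 alias-free samples
```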
Related papers
- Enhancing Synthetic Oversampling for Imbalanced Datasets Using Proxima-Orion Neighbors and q-Gaussian Weighting Technique [0.16385815610837165]
We propose a novel oversampling algorithm to increase the number of minority-class instances in an imbalanced dataset.
We select two instances, Proxima and Orion, from the set of all minority class instances, based on a combination of relative distance weights and density estimation of majority class instances.
We conduct comprehensive experiments on 42 datasets from the KEEL software and eight datasets from the UCI ML repository to evaluate the usefulness of the proposed PO-QG algorithm.
arXiv Detail & Related papers (2025-01-27T05:34:19Z)
- The Transferability of Downsampled Sparse Graph Convolutional Networks [12.629928166637]
We introduce a novel downsampling method based on a sparse random graph model and derive an expected upper bound for the transfer error.
Our findings show that smaller original graph sizes, higher expected average degrees, and increased sampling rates contribute to reducing this upper bound.
By incorporating both sparsity and topological similarity into the model, this study establishes an upper bound on the transfer error for downsampling in the training of large-scale sparse graphs.
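As a grounding sketch only: under the assumption that downsampling here means training on an induced subgraph, node subsampling of a sparse graph can be written as below. This is a generic illustration; the paper's sparse random-graph model and transfer-error bound are not reproduced.

```python
import numpy as np

def downsample_graph(adj, keep_ratio, seed=None):
    """Keep a uniform random fraction of nodes and return the induced
    subgraph's adjacency matrix together with the kept node indices."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    k = max(1, int(keep_ratio * n))
    keep = np.sort(rng.choice(n, size=k, replace=False))
    return adj[np.ix_(keep, keep)], keep
```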
arXiv Detail & Related papers (2024-08-30T13:19:20Z)
- Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters.
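For reference, the quantity itself is standard: per parameter, GSNR is the squared mean of the per-sample gradient divided by its variance. A minimal array-based sketch (the paper's exact selection procedure is not reproduced):

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    """Per-parameter gradient signal-to-noise ratio.

    per_sample_grads: (num_samples, num_params) array, one loss
    gradient per training sample. GSNR_j = mean_i(g_ij)**2 / var_i(g_ij);
    a high value means the samples agree on the update direction.
    """
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean ** 2 / (var + eps)   # eps guards zero variance
```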
arXiv Detail & Related papers (2023-10-11T10:21:34Z)
- Sampling-Based Decomposition Algorithms for Arbitrary Tensor Networks [5.672132510411465]
We show how to develop sampling-based alternating least squares (ALS) algorithms for decomposition of tensors into any tensor network (TN) format.
We implement and test two tensor decomposition algorithms that use our sampling framework in a feature extraction experiment.
arXiv Detail & Related papers (2022-10-07T21:46:26Z)
- Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification [28.128464387420216]
We show that learning is fundamentally constrained by a lack of minority group samples.
In particular, in the case of label shift we show that there is always an undersampling algorithm that is minimax optimal.
arXiv Detail & Related papers (2022-05-26T00:35:11Z)
- Toward Real-World Super-Resolution via Adaptive Downsampling Models [58.38683820192415]
This study proposes a novel method to simulate an unknown downsampling process without imposing restrictive prior knowledge.
We propose a generalizable low-frequency loss (LFL) in the adversarial training framework to imitate the distribution of target LR images without using any paired examples.
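The LFL itself is trained adversarially; as a minimal illustration of the quantity such a loss operates on, the low-frequency content of an image can be isolated with a plain FFT low-pass. Names and the cutoff convention below are assumptions, not the paper's implementation.

```python
import numpy as np

def low_frequency_component(img, cutoff):
    """Keep only the `cutoff` lowest spatial frequencies of a 2-D
    image via a centred square mask in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[max(cy - cutoff, 0):cy + cutoff,
         max(cx - cutoff, 0):cx + cutoff] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```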
arXiv Detail & Related papers (2021-09-08T06:00:32Z)
- Does Adversarial Oversampling Help us? [10.210871872870737]
We propose a three-player adversarial game-based end-to-end method to handle class imbalance in datasets.
Rather than adversarial minority oversampling, we propose an adversarial oversampling (AO) and a data-space oversampling (DO) approach.
The effectiveness of our proposed method has been validated with high-dimensional, highly imbalanced and large-scale multi-class datasets.
arXiv Detail & Related papers (2021-08-20T05:43:17Z)
- Correlation Clustering Reconstruction in Semi-Adversarial Models [70.11015369368272]
Correlation Clustering is an important clustering problem with many applications.
We study the reconstruction version of this problem in which one is seeking to reconstruct a latent clustering corrupted by random noise and adversarial modifications.
arXiv Detail & Related papers (2021-08-10T14:46:17Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs.
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
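The core idea can be sketched in one dimension: a fixed grid x[0::rate] breaks equivariance because shifting the input changes which samples survive, so the grid offset is instead chosen from the input itself. The energy-argmax rule below is one simple choice for illustration, not the paper's construction.

```python
import numpy as np

def equivariant_subsample(x, rate):
    """Shift-equivariant subsampling of a periodic 1-D signal (sketch).

    Pick the sampling-grid offset with the largest energy; a circular
    shift of the input then yields a circular shift of the output
    (up to ties in the argmax), instead of a different set of samples.
    """
    assert len(x) % rate == 0, "rate must divide the signal length"
    energies = [np.sum(x[p::rate] ** 2) for p in range(rate)]
    p = int(np.argmax(energies))   # input-dependent, shift-covariant offset
    return x[p::rate]
```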
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements and reducing the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
- A Group Norm Regularized Factorization Model for Subspace Segmentation [4.926716472066594]
This paper proposes a group norm regularized factorization model (GNRFM) inspired by the LRR model for subspace segmentation.
Specifically, we adopt group norm regularization to make the columns of the factor matrix sparse, thereby achieving low rank.
Compared with traditional models and algorithms, the proposed method is faster and more robust to noise, yielding better final clustering results.
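The mechanism in this summary is the standard column-wise group norm (L2,1 norm); its proximal operator is block soft-thresholding, which zeroes whole columns and hence drives the factor matrix toward low rank. A generic sketch, not the GNRFM solver; names are illustrative:

```python
import numpy as np

def prox_column_group_norm(F, lam):
    """Proximal operator of lam * sum_j ||F[:, j]||_2.

    Columns with norm <= lam are zeroed outright; the remaining
    columns are shrunk toward zero, giving column-sparse factors.
    """
    norms = np.linalg.norm(F, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return F * scale
```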
arXiv Detail & Related papers (2020-01-08T15:20:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.