Delving Deeper into Anti-aliasing in ConvNets
- URL: http://arxiv.org/abs/2008.09604v1
- Date: Fri, 21 Aug 2020 17:56:04 GMT
- Title: Delving Deeper into Anti-aliasing in ConvNets
- Authors: Xueyan Zou, Fanyi Xiao, Zhiding Yu, Yong Jae Lee
- Abstract summary: Aliasing refers to the phenomenon that high frequency signals degenerate into completely different ones after sampling.
We propose an adaptive content-aware low-pass filtering layer, which predicts separate filter weights for each spatial location and channel group.
- Score: 42.82751522973616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aliasing refers to the phenomenon that high frequency signals degenerate into
completely different ones after sampling. It arises as a problem in the context
of deep learning as downsampling layers are widely adopted in deep
architectures to reduce parameters and computation. The standard solution is to
apply a low-pass filter (e.g., Gaussian blur) before downsampling. However, it
can be suboptimal to apply the same filter across the entire content, as the
frequency of feature maps can vary across both spatial locations and feature
channels. To tackle this, we propose an adaptive content-aware low-pass
filtering layer, which predicts separate filter weights for each spatial
location and channel group of the input feature maps. We investigate the
effectiveness and generalization of the proposed method across multiple tasks
including ImageNet classification, COCO instance segmentation, and Cityscapes
semantic segmentation. Qualitative and quantitative results demonstrate that
our approach effectively adapts to the different feature frequencies to avoid
aliasing while preserving useful information for recognition. Code is available
at https://maureenzou.github.io/ddac/.
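As a rough illustration of the core idea (not the authors' implementation), the NumPy sketch below applies a per-location, softmax-normalized low-pass kernel before strided downsampling. The `logits` input stands in for the small sub-network that would predict the filter weights, and a single channel group is used for brevity (the paper predicts separate weights per channel group):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_low_pass_downsample(feat, logits, k=3, stride=2):
    """Blur `feat` (C, H, W) with a per-location k x k kernel, then downsample.

    `logits` has shape (H, W, k*k): unnormalized kernel weights predicted for
    each spatial location.  A softmax makes each kernel non-negative and
    sum-to-one, so constant regions pass through unchanged while high-frequency
    content is smoothed before the stride-`stride` subsampling.
    """
    C, H, W = feat.shape
    weights = softmax(logits, axis=-1).reshape(H, W, k, k)
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + k, j:j + k]           # (C, k, k) window
            out[:, i, j] = (patch * weights[i, j]).sum(axis=(1, 2))
    return out[:, ::stride, ::stride]

# Toy usage with random logits standing in for the predictor sub-network.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
logits = rng.standard_normal((8, 8, 9))
y = adaptive_low_pass_downsample(feat, logits)
print(y.shape)  # (4, 4, 4)
```

Because each predicted kernel sums to one, the layer reduces to an identity on flat regions while adapting its smoothing strength wherever the feature frequency is high.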
Related papers
- CasDyF-Net: Image Dehazing via Cascaded Dynamic Filters [0.0]
Image dehazing aims to restore image clarity and visual quality by reducing atmospheric scattering and absorption effects.
Inspired by dynamic filtering, we propose using cascaded dynamic filters to create a multi-branch network.
Experiments on RESIDE, Haze4K, and O-Haze datasets validate our method's effectiveness.
arXiv Detail & Related papers (2024-09-13T03:20:38Z)
- Neural Gaussian Scale-Space Fields [60.668800252986976]
We present an efficient method to learn the continuous, anisotropic Gaussian scale space of an arbitrary signal.
Our approach is trained in a self-supervised manner, i.e., training does not require any manual filtering.
Our neural Gaussian scale-space fields faithfully capture multiscale representations across a broad range of modalities.
arXiv Detail & Related papers (2024-05-31T16:26:08Z)
- Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem [39.347616859256256]
A Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology.
Various methods have been proposed to create an adaptive filter by incorporating an extra filter extracted from the graph topology.
We propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
arXiv Detail & Related papers (2024-01-26T14:02:29Z)
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation [17.804090651425955]
Image-level weakly-supervised segmentation (WSSS) reduces the usually vast data annotation cost by using surrogate segmentation masks during training.
Our work is based on two techniques for improving CAMs: importance sampling, which is a substitute for GAP, and the feature similarity loss.
We reformulate both techniques based on binomial posteriors of multiple independent binary problems.
This has two benefits: their performance is improved and they become more general, resulting in an add-on method that can boost virtually any WSSS method.
arXiv Detail & Related papers (2023-04-05T17:43:57Z)
- Network Pruning via Feature Shift Minimization [8.593369249204132]
We propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by converging the information of both features and filters.
The proposed method yields state-of-the-art performance on various benchmark networks and datasets, verified by extensive experiments.
arXiv Detail & Related papers (2022-07-06T12:50:26Z)
- Message Passing in Graph Convolution Networks via Adaptive Filter Banks [81.12823274576274]
We present a novel graph convolution operator, termed BankGCN.
It decomposes multi-channel signals on graphs into subspaces and handles particular information in each subspace with an adapted filter.
It achieves excellent performance in graph classification on a collection of benchmark graph datasets.
arXiv Detail & Related papers (2021-06-18T04:23:34Z)
- Resolution learning in deep convolutional networks using scale-space theory [31.275270391367425]
Resolution in deep convolutional neural networks (CNNs) is typically bounded by the receptive field size through filter sizes, and subsampling layers or strided convolutions on feature maps.
We propose to do away with hard-coded resolution hyper-parameters and aim to learn the appropriate resolution from data.
We use scale-space theory to obtain a self-similar parametrization of filters, employing the N-Jet: a truncated Taylor series that approximates a filter by a learned combination of Gaussian derivative filters.
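The N-Jet idea can be sketched in NumPy as follows. The coefficients would be learned by gradient descent in the actual method; the Hermite-recurrence construction used here to generate the Gaussian-derivative basis is a standard technique, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def gaussian_derivative_basis(sigma, order, radius=None):
    """1-D Gaussian derivative filters up to `order` (the N-jet basis).

    Row n is the n-th derivative of a Gaussian with scale `sigma`, built via
    the Hermite-polynomial identity
        d^n/dx^n G(x) = (-1/(sigma*sqrt(2)))^n * H_n(x/(sigma*sqrt(2))) * G(x).
    """
    if radius is None:
        radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                                  # zeroth order sums to one
    t = x / (sigma * np.sqrt(2))
    # Hermite recurrence: H_0 = 1, H_1 = 2t, H_n = 2t*H_{n-1} - 2(n-1)*H_{n-2}
    H = [np.ones_like(t), 2 * t]
    for n in range(2, order + 1):
        H.append(2 * t * H[n - 1] - 2 * (n - 1) * H[n - 2])
    return np.stack([((-1 / (sigma * np.sqrt(2)))**n) * H[n] * g
                     for n in range(order + 1)])

def njet_filter(coeffs, sigma):
    """A filter expressed as a linear combination of Gaussian derivatives."""
    basis = gaussian_derivative_basis(sigma, order=len(coeffs) - 1)
    return np.asarray(coeffs) @ basis
```

Learning `coeffs` (and, in the paper's setting, `sigma`) rather than a raw kernel ties the filter's resolution to a small set of differentiable parameters.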
arXiv Detail & Related papers (2021-06-07T08:23:02Z)
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
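The paper's guided-filter formulation is more involved (it estimates a single structure-transfer coefficient with a guide image), but the classic unsharp-masking operation it builds on can be sketched as follows; the box blur is a stand-in for whatever low-pass filter is used:

```python
import numpy as np

def box_blur(x, k=3):
    """Simple 1-D moving-average low-pass filter with edge padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def unsharp_mask(x, amount=1.0, k=3):
    """Classic unsharp masking: add back the high-pass residual x - blur(x)."""
    return x + amount * (x - box_blur(x, k))
```

Since the residual `x - blur(x)` vanishes on flat regions, the operation amplifies only the structure the low-pass filter removed.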
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
- Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters [64.46270549587004]
Convolutional neural networks (CNNs) have been successfully used in a range of tasks.
CNNs are often viewed as "black boxes" and lack interpretability.
We propose a novel strategy to train interpretable CNNs by encouraging class-specific filters.
arXiv Detail & Related papers (2020-07-16T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.