How Convolutional Neural Networks Deal with Aliasing
- URL: http://arxiv.org/abs/2102.07757v1
- Date: Mon, 15 Feb 2021 18:52:47 GMT
- Title: How Convolutional Neural Networks Deal with Aliasing
- Authors: Antônio H. Ribeiro and Thomas B. Schön
- Abstract summary: We explore how and to what extent CNNs counteract aliasing by means of two examples.
In the first, we assess the CNN's capability of distinguishing oscillations at the input, showing that the redundancies in the intermediate channels play an important role in succeeding at the task.
In the second, we show that an image classifier CNN, while in principle capable of implementing anti-aliasing filters, does not prevent aliasing from taking place in the intermediate layers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The convolutional neural network (CNN) remains an essential tool in solving
computer vision problems. Standard convolutional architectures consist of
stacked layers of operations that progressively downscale the image. Aliasing
is a well-known side-effect of downsampling that may take place: it causes
high-frequency components of the original signal to become indistinguishable
from its low-frequency components. While downsampling takes place in the
max-pooling layers or in the strided-convolutions in these models, there is no
explicit mechanism that prevents aliasing from taking place in these layers.
Due to the impressive performance of these models, it is natural to suspect
that they, somehow, implicitly deal with this distortion. The question we aim
to answer in this paper is simply: "how and to what extent do CNNs counteract
aliasing?" We explore the question by means of two examples: In the first, we
assess the CNN's capability of distinguishing oscillations at the input, showing
that the redundancies in the intermediate channels play an important role in
succeeding at the task; In the second, we show that an image classifier CNN,
while in principle capable of implementing anti-aliasing filters, does not
prevent aliasing from taking place in the intermediate layers.
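As a toy illustration of the effect described in the abstract (not taken from the paper; the signal length, the frequencies, and the ideal FFT low-pass filter are illustrative choices), the numpy sketch below shows how plain stride-2 subsampling maps a high-frequency cosine onto exactly the same samples as a low-frequency one, and how low-pass filtering to the new Nyquist limit before subsampling removes that ambiguity:

```python
# Toy sketch, assuming a 64-sample 1-D signal; not the authors' experimental setup.
import numpy as np

n = np.arange(64)
high = np.cos(2 * np.pi * 24 * n / 64)  # 24 cycles: above the post-subsampling Nyquist (16)
low = np.cos(2 * np.pi * 8 * n / 64)    # 8 cycles: below it

stride = 2  # plain subsampling, as in a strided convolution or pooling layer
print(np.allclose(high[::stride], low[::stride]))  # True: the two become indistinguishable

def ideal_lowpass(x, keep):
    """Zero out all frequency bins above `keep` cycles (an ideal anti-aliasing filter)."""
    X = np.fft.rfft(x)
    X[keep + 1:] = 0
    return np.fft.irfft(X, n=len(x))

# Removing everything at or above the new Nyquist (16 cycles) before subsampling
# discards the high-frequency component instead of folding it onto the low one.
print(np.allclose(ideal_lowpass(high, 15)[::stride],
                  ideal_lowpass(low, 15)[::stride]))  # False: now distinguishable
```

The point mirrors the abstract: a bare max-pooling or strided-convolution layer performs no such filtering step, so whether the network counteracts aliasing depends on what the learned convolutions preceding the downsampling happen to do.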
Related papers
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Aliasing is a Driver of Adversarial Attacks [35.262520934751]
We investigate the hypothesis that the existence of adversarial perturbations is due in part to aliasing in neural networks.
Our ultimate goal is to increase robustness against adversarial attacks using explainable, non-trained, structural changes only.
Our experimental results show a solid link between anti-aliasing and adversarial attacks.
arXiv Detail & Related papers (2022-12-22T14:52:44Z) - FrequencyLowCut Pooling -- Plug & Play against Catastrophic Overfitting [12.062691258844628]
This paper introduces an aliasing-free down-sampling operation that can easily be plugged into any CNN architecture; a generic spatial-domain sketch of this low-pass-before-downsampling idea is given after this list.
Our experiments show that, in combination with simple and fast FGSM adversarial training, our hyper-parameter-free operator significantly improves model robustness.
arXiv Detail & Related papers (2022-04-01T14:51:28Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected-version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z) - Noise-Equipped Convolutional Neural Networks [15.297063646935078]
Convolutional Neural Network (CNN) has been widely employed in image synthesis and translation tasks.
When a CNN model is fed with a flat input, the transformation degrades into a scaling operation due to the spatial sharing nature of convolution kernels.
arXiv Detail & Related papers (2020-12-09T09:01:45Z) - When to Use Convolutional Neural Networks for Inverse Problems [40.60063929073102]
We show how a convolutional neural network can be viewed as an approximate solution to a convolutional sparse coding problem.
We argue that for some types of inverse problems the CNN approximation breaks down leading to poor performance.
Specifically we identify JPEG artifact reduction and non-rigid trajectory reconstruction as challenging inverse problems for CNNs.
arXiv Detail & Related papers (2020-03-30T21:08:14Z) - Fixed smooth convolutional layer for avoiding checkerboard artifacts in
CNNs [20.242221018089715]
We propose a fixed convolutional layer with an order of smoothness for avoiding checkerboard artifacts in convolutional neural networks (CNNs).
The proposed layer can perfectly prevent checkerboard artifacts caused by strided convolutional layers or upsampling layers, including transposed convolutional layers.
The fixed layer is applied to generative adversarial networks (GANs) for the first time.
arXiv Detail & Related papers (2020-02-06T06:36:45Z)