Spatial Frequency Bias in Convolutional Generative Adversarial Networks
- URL: http://arxiv.org/abs/2010.01473v3
- Date: Fri, 18 Dec 2020 08:43:19 GMT
- Title: Spatial Frequency Bias in Convolutional Generative Adversarial Networks
- Authors: Mahyar Khayatkhoei, Ahmed Elgammal
- Abstract summary: We show that the ability of convolutional GANs to learn a distribution is significantly affected by the spatial frequency of the underlying carrier signal.
We show that this bias is not merely a result of the scarcity of high frequencies in natural images; rather, it is a systemic bias hindering the learning of high frequencies regardless of their prominence in a dataset.
- Score: 14.564246294896396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the success of Generative Adversarial Networks (GANs) on natural images
quickly propels them into various real-life applications across different
domains, it becomes more and more important to clearly understand their
limitations. Specifically, understanding GANs' capability across the full
spectrum of spatial frequencies, i.e. beyond the low-frequency dominant
spectrum of natural images, is critical for assessing the reliability of GAN
generated data in any detail-sensitive application (e.g. denoising, filling and
super-resolution in medical and satellite images). In this paper, we show that
the ability of convolutional GANs to learn a distribution is significantly
affected by the spatial frequency of the underlying carrier signal, that is,
GANs have a bias against learning high spatial frequencies. Crucially, we show
that this bias is not merely a result of the scarcity of high frequencies in
natural images; rather, it is a systemic bias hindering the learning of high
frequencies regardless of their prominence in a dataset. Furthermore, we
explain why large-scale GANs' ability to generate fine details on natural
images does not exclude them from the adverse effects of this bias. Finally, we
propose a method for manipulating this bias with minimal computational
overhead. This method can be used to explicitly direct computational resources
towards any specific spatial frequency of interest in a dataset, extending the
flexibility of GANs.
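The abstract frames the bias as a question of how faithfully a GAN reproduces each spatial frequency present in its training data. As a minimal, hedged sketch of how one could probe this (not the paper's actual experimental protocol; the function names and the NumPy-only setup are assumptions made for illustration), the radially averaged power spectra of a real and a generated batch can be compared band by band:

```python
import numpy as np

def radial_power_spectrum(images):
    """Radially averaged power spectrum of a batch of grayscale images
    of shape (N, H, W): returns one value per integer frequency radius."""
    _, h, w = images.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))) ** 2
    power = power.mean(axis=0)                                # average over the batch
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2).astype(int)   # distance from the DC bin
    totals = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    return totals / np.maximum(counts, 1)

def spectral_gap(real_images, fake_images, eps=1e-12):
    """Per-radius log-spectrum difference; a persistent positive gap at large
    radii would indicate the kind of high-frequency deficit described above."""
    return (np.log(radial_power_spectrum(real_images) + eps)
            - np.log(radial_power_spectrum(fake_images) + eps))
```

A toy check in the spirit of the abstract's "carrier signal" framing would be to build datasets of sinusoidal carriers at a chosen spatial frequency and observe how the gap at the corresponding radius behaves as that frequency increases.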
Related papers
- FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss [5.349799154834945]
This paper introduces Frequency Consistency for Implicit Neural Representation (FreqINR), an innovative arbitrary-scale super-resolution method.
During training, we employ an Adaptive Discrete Cosine Transform Frequency Loss (ADFL) to minimize the frequency gap between high-resolution (HR) and ground-truth images.
During inference, we extend the receptive field to preserve spectral coherence between low-resolution (LR) and ground-truth images.
arXiv Detail & Related papers (2024-08-25T03:53:17Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically identify a model property that correlates strongly with generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- FreGAN: Exploiting Frequency Components for Training GANs under Limited Data [3.5459430566117893]
Training GANs under limited data often leads to discriminator overfitting and memorization issues.
This paper proposes FreGAN, which raises the model's frequency awareness and draws more attention to producing high-frequency signals.
In addition to exploiting both real and generated images' frequency information, we also involve the frequency signals of real images as a self-supervised constraint.
arXiv Detail & Related papers (2022-10-11T14:02:52Z)
- On the Frequency Bias of Generative Models [61.60834513380388]
We analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training.
We find that none of the existing approaches can fully resolve spectral artifacts yet.
Our results suggest that there is great potential in improving the discriminator.
arXiv Detail & Related papers (2021-11-03T18:12:11Z)
- Spectral Bias in Practice: The Role of Function Frequency in Generalization [10.7218588164913]
We propose methodologies for measuring spectral bias in modern image classification networks.
We find that networks that generalize well strike a balance between having enough complexity to fit the data and being simple enough to avoid overfitting.
Our work enables measuring and ultimately controlling the spectral behavior of neural networks used for image classification.
arXiv Detail & Related papers (2021-10-06T00:16:10Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find a low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- Are High-Frequency Components Beneficial for Training of Generative Adversarial Networks [11.226288436817956]
Generative Adversarial Networks (GANs) have the ability to generate realistic images that are visually indistinguishable from real images.
Recent studies of the image spectrum have demonstrated that generated and real images differ significantly at high frequencies.
We propose two preprocessing methods that eliminate high-frequency differences during GAN training.
arXiv Detail & Related papers (2021-03-20T04:37:06Z)
- Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing gaps in the frequency domain can further improve image reconstruction and synthesis quality.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-12-23T17:32:04Z)
- Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples [15.236551149698496]
Adversarial Attacks are still a significant challenge for neural networks.
Recent work has shown that adversarial perturbations typically contain high-frequency features.
We hypothesize that the local (i.e. bounded-width) convolutional operations commonly used in current neural networks are implicitly biased to learn high frequency features.
arXiv Detail & Related papers (2020-06-19T23:50:51Z)
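Several entries above, notably FreqINR's Adaptive DCT Frequency Loss and the Focal Frequency Loss paper, share one core idea: measure the generated-versus-target gap in the frequency domain and reweight it toward the frequencies that are currently matched worst. The sketch below is only a NumPy illustration of that shared idea; the transform and weighting choices in the actual papers differ, and the function name is mine.

```python
import numpy as np

def weighted_spectrum_gap(pred, target, alpha=1.0, eps=1e-12):
    """Frequency-domain distance between a predicted image and its target
    (both H x W, grayscale). Each frequency's squared spectrum error is
    reweighted by its own normalized magnitude, so poorly matched
    frequencies dominate the total -- the 'focal' idea in spirit only."""
    f_pred = np.fft.fft2(pred)
    f_target = np.fft.fft2(target)
    gap = np.abs(f_pred - f_target) ** 2            # per-frequency squared error
    weight = (gap / (gap.max() + eps)) ** alpha     # emphasize hard frequencies
    return float((weight * gap).mean())
```

In an actual training loop this would be written against a differentiable FFT (e.g. torch.fft.fft2), and the focal frequency loss paper treats the weight as a term that gradients do not flow through.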
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.