FreGAN: Exploiting Frequency Components for Training GANs under Limited Data
- URL: http://arxiv.org/abs/2210.05461v1
- Date: Tue, 11 Oct 2022 14:02:52 GMT
- Title: FreGAN: Exploiting Frequency Components for Training GANs under Limited Data
- Authors: Mengping Yang, Zhe Wang, Ziqiu Chi, Yanbing Zhang
- Abstract summary: Training GANs under limited data often leads to discriminator overfitting and memorization issues.
This paper proposes FreGAN, which raises the model's frequency awareness and draws more attention to producing high-frequency signals.
In addition to exploiting both real and generated images' frequency information, we also involve the frequency signals of real images as a self-supervised constraint.
- Score: 3.5459430566117893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Training GANs under limited data often leads to discriminator overfitting and
memorization issues, causing divergent training. Existing approaches mitigate
the overfitting by employing data augmentations, model regularization, or
attention mechanisms. However, they ignore the frequency bias of GANs and give
little consideration to frequency information, especially high-frequency
signals that contain rich details. To fully utilize the frequency information
of limited data, this paper proposes FreGAN, which raises the model's frequency
awareness and draws more attention to producing high-frequency signals,
facilitating high-quality generation. In addition to exploiting both real and
generated images' frequency information, we also involve the frequency signals
of real images as a self-supervised constraint, which alleviates the GAN
disequilibrium and encourages the generator to synthesize adequate rather than
arbitrary frequency signals. Extensive results demonstrate the superiority and
effectiveness of our FreGAN in ameliorating generation quality in the low-data
regime (especially when fewer than 100 training images are available). Moreover,
FreGAN can be seamlessly applied to existing regularization and attention-based
models to further boost performance.
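As an illustration of the idea (not the paper's exact objective), the frequency-awareness constraint can be sketched as an FFT-based regularizer that pushes the batch-averaged high-frequency spectrum of generated images towards that of real images. The cutoff `radius` and the weight `lambda_freq` below are assumed placeholders.
```python
import torch
import torch.nn.functional as F


def highfreq_amplitude(img: torch.Tensor, radius: float = 0.25) -> torch.Tensor:
    """Batch-averaged amplitude spectrum with the low-frequency disc masked out.

    img: (B, C, H, W). `radius` is an illustrative cutoff, not a value from the paper.
    """
    spec = torch.fft.fftshift(torch.fft.fft2(img, norm="ortho"), dim=(-2, -1))
    H, W = img.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, H, device=img.device),
        torch.linspace(-1, 1, W, device=img.device),
        indexing="ij",
    )
    highpass = ((yy ** 2 + xx ** 2).sqrt() > radius).float()  # keep only high frequencies
    return (spec.abs() * highpass).mean(dim=0)                # average over the batch


def frequency_consistency_loss(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Self-supervised constraint: match the high-frequency content of real images."""
    return F.l1_loss(highfreq_amplitude(fake), highfreq_amplitude(real))


# Sketch of a generator update:
#   g_loss = adversarial_loss + lambda_freq * frequency_consistency_loss(real_batch, fake_batch)
```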
Related papers
- Augmenting Training Data with Vector-Quantized Variational Autoencoder for Classifying RF Signals [9.99212997328053]
This paper proposes the use of a Vector-Quantized Variational Autoencoder (VQ-VAE) to augment training data.
The VQ-VAE model generates high-fidelity synthetic RF signals, increasing the diversity and fidelity of the training dataset.
Our experimental results show that incorporating VQ-VAE-generated data significantly improves the classification accuracy of the baseline model.
arXiv Detail & Related papers (2024-10-23T21:17:45Z)
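For the entry above, the central mechanism is the vector-quantization bottleneck of the VQ-VAE. The sketch below shows a generic nearest-neighbour codebook lookup with a straight-through gradient; codebook size, embedding dimension, and commitment weight are placeholder choices, not the authors' settings. Synthetic RF signals decoded from sampled or perturbed code indices would then be added to the training set.
```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Generic VQ-VAE codebook lookup (nearest neighbour + straight-through estimator)."""

    def __init__(self, num_codes: int = 512, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment weight

    def forward(self, z_e: torch.Tensor):
        # z_e: (B, N, dim) continuous encoder outputs
        # squared Euclidean distance to every code: (B, N, num_codes)
        dist = (z_e.pow(2).sum(-1, keepdim=True)
                - 2 * z_e @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(-1))
        idx = dist.argmin(dim=-1)        # nearest code per vector
        z_q = self.codebook(idx)         # quantized latents
        # codebook + commitment losses, straight-through gradient for the decoder path
        vq_loss = ((z_q - z_e.detach()) ** 2).mean() \
            + self.beta * ((z_q.detach() - z_e) ** 2).mean()
        z_q = z_e + (z_q - z_e).detach()
        return z_q, vq_loss, idx
```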
- Tuning Frequency Bias of State Space Models [48.60241978021799]
State space models (SSMs) leverage linear, time-invariant (LTI) systems to learn sequences with long-range dependencies.
We find that SSMs exhibit an implicit bias toward capturing low-frequency components more effectively than high-frequency ones.
arXiv Detail & Related papers (2024-10-02T21:04:22Z)
- Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness [23.77988226456179]
This paper proposes a novel module called High-Frequency Feature Disentanglement and Recalibration (HFDR).
HFDR separates features into high-frequency and low-frequency components and recalibrates the high-frequency features to capture latent useful semantics.
Extensive experiments showcase the potential and superiority of our approach in resisting various white-box and transfer attacks, as well as its strong generalization capability.
arXiv Detail & Related papers (2024-07-04T15:46:01Z)
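For the HFDR module above, one common way to realise the high/low-frequency disentanglement is to treat a local average of the feature map as its low-frequency part and the residual as its high-frequency part. The sketch below shows only that generic split; the pooling size is an assumed hyperparameter and the recalibration/attention step is not reproduced.
```python
import torch
import torch.nn.functional as F


def split_feature_frequencies(feat: torch.Tensor, pool: int = 4):
    """Split a (B, C, H, W) feature map into low-frequency (local mean) and
    high-frequency (residual) components. `pool` is an illustrative choice."""
    low = F.interpolate(F.avg_pool2d(feat, pool), size=feat.shape[-2:],
                        mode="bilinear", align_corners=False)
    high = feat - low   # fine details / edges live here
    return low, high    # a recalibration step (e.g. channel attention) would act on `high`
```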
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258]
We propose to adaptively learn frequency information in a two-branch detection framework, dubbed AFD.
We liberate our network from the fixed frequency transforms, and achieve better performance with our data- and task-dependent transform layers.
arXiv Detail & Related papers (2022-03-27T14:25:52Z)
- Frequency-bin entanglement from domain-engineered down-conversion [101.18253437732933]
We present a single-pass source of discrete frequency-bin entanglement which does not use filtering or a resonant cavity.
We use a domain-engineered nonlinear crystal to generate an eight-mode frequency-bin entangled source at telecommunication wavelengths.
arXiv Detail & Related papers (2022-01-18T19:00:29Z)
- Unsupervised Image Denoising with Frequency Domain Knowledge [2.834895018689047]
Supervised learning-based methods yield robust denoising results, yet they are inherently limited by the need for large-scale datasets.
In this study we propose a frequency-sensitive unsupervised denoising method.
Results using natural and synthetic datasets indicate that our unsupervised learning method augmented with frequency information achieves state-of-the-art denoising performance.
arXiv Detail & Related papers (2021-11-29T07:41:32Z)
- Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical-flow-based and end-to-end deep-learning-based solutions, are error-prone in either detail restoration or ghosting-artifact removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain; the discrete wavelet transform (DWT) is used to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
arXiv Detail & Related papers (2021-08-03T12:26:33Z)
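The band separation used in the entry above is a standard 2-D discrete wavelet transform; a minimal sketch with PyWavelets is given below. The library and the Haar wavelet are illustrative choices, not necessarily those of the paper.
```python
import numpy as np
import pywt


def dwt_bands(img: np.ndarray, wavelet: str = "haar"):
    """Decompose an (H, W) image into one low-frequency and three high-frequency sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return cA, (cH, cV, cD)   # cA drives ghost suppression, cH/cV/cD carry details


img = np.random.rand(256, 256).astype(np.float32)   # placeholder input frame
low, highs = dwt_bands(img)
recon = pywt.idwt2((low, highs), "haar")            # the transform is perfectly invertible
assert np.allclose(recon, img, atol=1e-5)
```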
- Encoding Frequency Constraints in Preventive Unit Commitment Using Deep Learning with Region-of-Interest Active Sampling [8.776029771500689]
This paper presents a generic data-driven framework for frequency-constrained unit commitment (FCUC) under high renewable penetration.
Deep neural networks (DNNs) are trained to predict the frequency response using real data or high-fidelity simulation data.
In the data generation phase, all possible power injections are considered, and a region-of-interest active sampling scheme is proposed to include power injection samples with frequency nadirs closer to the UFLC threshold.
arXiv Detail & Related papers (2021-02-18T19:04:21Z)
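The region-of-interest sampling step described in the entry above can be sketched as a simple filter that keeps candidate operating points whose simulated frequency nadir lands near the shedding threshold. The threshold, band width, and the `simulate_nadir` callable below are placeholders, not values or interfaces from the paper.
```python
import numpy as np


def roi_active_sample(power_injections: np.ndarray,
                      simulate_nadir,
                      threshold_hz: float = 59.5,   # placeholder threshold, not from the paper
                      band_hz: float = 0.2) -> np.ndarray:
    """Keep candidate power-injection samples whose frequency nadir lies close to the threshold.

    power_injections: (N, D) candidate operating points.
    simulate_nadir:   callable mapping one operating point to its simulated nadir (Hz).
    """
    nadirs = np.array([simulate_nadir(p) for p in power_injections])
    mask = np.abs(nadirs - threshold_hz) <= band_hz
    return power_injections[mask]   # region-of-interest samples used to train the DNN
```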
- Focal Frequency Loss for Image Reconstruction and Synthesis [125.7135706352493]
We show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further.
We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize.
arXiv Detail & Related papers (2020-12-23T17:32:04Z)
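The adaptive focusing described in the entry above is usually written as a spectrum-weighted distance; the sketch below follows the commonly cited form of the focal frequency loss, with the focusing exponent `alpha` treated as an assumed hyperparameter.
```python
import torch


def focal_frequency_loss(fake: torch.Tensor, real: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Weighted distance between 2-D spectra; hard-to-synthesize frequencies get larger weights."""
    f_fake = torch.fft.fft2(fake, norm="ortho")
    f_real = torch.fft.fft2(real, norm="ortho")
    dist = (f_fake - f_real).abs() ** 2            # per-frequency squared error
    weight = dist.sqrt() ** alpha                  # spectrum weight w = |F_fake - F_real|^alpha
    weight = weight / weight.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    return (weight.detach() * dist).mean()         # weights are not back-propagated through
```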
- Spatial Frequency Bias in Convolutional Generative Adversarial Networks [14.564246294896396]
We show that the ability of convolutional GANs to learn a distribution is significantly affected by the spatial frequency of the underlying carrier signal.
We show that this bias is not merely a result of the scarcity of high frequencies in natural images, rather, it is a systemic bias hindering the learning of high frequencies regardless of their prominence in a dataset.
arXiv Detail & Related papers (2020-10-04T03:05:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.