Defects of Convolutional Decoder Networks in Frequency Representation
- URL: http://arxiv.org/abs/2210.09020v2
- Date: Fri, 1 Dec 2023 12:39:01 GMT
- Title: Defects of Convolutional Decoder Networks in Frequency Representation
- Authors: Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, Quanshi Zhang
- Abstract summary: We prove the representation defects of a cascaded convolutional decoder network.
We apply the discrete Fourier transform to each channel of the feature map in an intermediate layer of the decoder network.
- Score: 34.70224140460288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we prove the representation defects of a cascaded
convolutional decoder network, considering its capacity to represent
different frequency components of an input sample. We apply the discrete
Fourier transform to each channel of the feature map in an intermediate layer
of the decoder network. Then, we extend the 2D circular convolution theorem to
represent the forward and backward propagations through convolutional layers in
the frequency domain. Based on this, we prove three defects in representing
feature spectra. First, we prove that the convolution operation, the
zero-padding operation, and a set of other settings all make a convolutional
decoder network more likely to weaken high-frequency components. Second, we
prove that the upsampling operation generates a feature spectrum, in which
strong signals repeatedly appear at certain frequencies. Third, we prove that
if there is a small shift between the frequency components of the input sample
and those of the target output for regression, then the decoder usually
cannot be learned effectively.
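To make the frequency-domain setting concrete, here is a minimal NumPy sketch (not code from the paper; all tensor names and sizes are illustrative) of three ingredients of the analysis: the per-channel 2D DFT of a feature map, the 2D circular convolution theorem that turns spatial convolution into element-wise multiplication of spectra, and the spectrum replication caused by zero-insertion upsampling. It only demonstrates the representation tools, not the defect proofs themselves.

```python
import numpy as np

# Toy "feature map": C channels of size H x W from some intermediate decoder layer.
rng = np.random.default_rng(0)
C, H, W = 4, 32, 32
feature = rng.standard_normal((C, H, W))

# (1) Per-channel 2D discrete Fourier transform of the feature map.
spectrum = np.fft.fft2(feature, axes=(-2, -1))            # shape (C, H, W), complex

# (2) 2D circular convolution theorem: circular convolution in the spatial
#     domain equals element-wise multiplication of the 2D DFTs.
kernel = rng.standard_normal((H, W))                       # kernel zero-padded to H x W
circ_conv = np.real(np.fft.ifft2(np.fft.fft2(feature[0]) * np.fft.fft2(kernel)))

# Direct circular convolution for comparison.
direct = np.zeros((H, W))
for u in range(H):
    for v in range(W):
        direct += kernel[u, v] * np.roll(feature[0], shift=(u, v), axis=(0, 1))
assert np.allclose(circ_conv, direct)

# (3) Zero-insertion upsampling (one common decoder upsampling scheme) replicates
#     the spectrum: the 2H x 2W spectrum consists of four identical H x W copies,
#     so strong signals reappear at shifted frequencies.
up = np.zeros((H * 2, W * 2))
up[::2, ::2] = feature[0]
up_spec = np.fft.fft2(up)
assert np.allclose(up_spec[:H, :W], up_spec[H:, :W])
assert np.allclose(up_spec[:H, :W], up_spec[:H, W:])
```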
Related papers
- F2former: When Fractional Fourier Meets Deep Wiener Deconvolution and Selective Frequency Transformer for Image Deblurring [8.296475046681696]
We propose a novel approach based on the Fractional Fourier Transform (FRFT), a unified spatial-frequency representation.
We show that the performance of our proposed method is superior to other state-of-the-art (SOTA) approaches.
arXiv Detail & Related papers (2024-09-03T17:05:12Z)
- FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions [40.80112550091512]
Implicit Neural Representation (INR) is causing a revolution in the field of signal processing.
Current INR techniques suffer from a restricted capability to tune their supported frequency set.
We propose variable-periodic activation functions and build FINER on them.
We demonstrate the capabilities of FINER in the contexts of 2D image fitting, 3D signed distance field representation, and 5D neural radiance field optimization.
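The summary is terse, so here is a hedged PyTorch sketch of what a variable-periodic activation can look like: a SIREN-style layer whose activation is sin(ω(|z|+1)z), so the local frequency depends on the pre-activation magnitude. The exact parameterization and initialization used by FINER are assumptions here, not taken from the paper.

```python
import torch
import torch.nn as nn

class FinerLayer(nn.Module):
    """One INR layer with a variable-periodic activation, sin(omega * (|z| + 1) * z).

    Sketch only: the (|z| + 1) factor lets the local frequency grow with the
    pre-activation magnitude, so the supported frequency set can be tuned via the
    bias initialization range (an assumption about FINER's design, not verified code).
    """

    def __init__(self, in_dim, out_dim, omega=30.0, bias_scale=1.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_dim, out_dim)
        # A wider bias init shifts the layer toward higher-frequency regimes.
        nn.init.uniform_(self.linear.bias, -bias_scale, bias_scale)

    def forward(self, x):
        z = self.linear(x)
        return torch.sin(self.omega * (torch.abs(z) + 1.0) * z)

# Usage: a tiny 2D-coordinate -> RGB field.
net = nn.Sequential(FinerLayer(2, 64), FinerLayer(64, 64), nn.Linear(64, 3))
coords = torch.rand(1024, 2) * 2 - 1          # coordinates in [-1, 1]^2
rgb = net(coords)                              # (1024, 3)
```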
arXiv Detail & Related papers (2023-12-05T02:23:41Z)
- Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
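Read loosely, the "single transform" blueprint can be sketched as: one real FFT on the way in, learned frequency-domain layers in between, and one inverse FFT on the way out. The layer structure below (per-frequency complex scaling) is an illustrative assumption, not T1's actual architecture.

```python
import torch
import torch.nn as nn

class FreqPointwise(nn.Module):
    """Learned per-frequency complex scaling (a stand-in for a frequency-domain layer)."""
    def __init__(self, n_freq):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_freq, dtype=torch.cfloat) * 0.02)

    def forward(self, x_hat):                   # x_hat: (..., n_freq), complex
        return x_hat * self.weight

class TransformOnceSketch(nn.Module):
    """Sketch of a 'transform once' pipeline: one rFFT in, stacked frequency-domain
    layers, one inverse rFFT out. Note: stacked pointwise scalings are linear and
    collapse into one; real models interleave mixing/nonlinear steps. Structural
    sketch only, not T1's design."""
    def __init__(self, length, depth=3):
        super().__init__()
        n_freq = length // 2 + 1
        self.length = length
        self.layers = nn.ModuleList([FreqPointwise(n_freq) for _ in range(depth)])

    def forward(self, x):                       # x: (batch, length), real signal
        x_hat = torch.fft.rfft(x, dim=-1)       # transform once
        for layer in self.layers:
            x_hat = layer(x_hat)
        return torch.fft.irfft(x_hat, n=self.length, dim=-1)  # invert once

model = TransformOnceSketch(length=256)
y = model(torch.randn(8, 256))                  # (8, 256)
```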
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Trainable Wavelet Neural Network for Non-Stationary Signals [0.0]
This work introduces a wavelet neural network to learn a filter-bank specialized to fit non-stationary signals and improve interpretability and performance for digital signal processing.
The network uses a wavelet transform as its first layer, where the convolution is a parameterized function of the complex Morlet wavelet.
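A hedged sketch of such a first layer: each output channel convolves the input with a complex Morlet wavelet exp(iω₀t)·exp(−t²/2σ²) whose center frequency ω₀ and width σ are learnable. The parameter ranges and layer interface below are assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorletWaveletLayer(nn.Module):
    """First layer of a wavelet neural network (sketch): each pair of output channels
    is a convolution with the real/imaginary parts of a complex Morlet wavelet whose
    center frequency and Gaussian width are learned."""

    def __init__(self, n_filters=16, kernel_size=129):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable center frequencies (radians/sample) and Gaussian widths (samples).
        self.omega = nn.Parameter(torch.linspace(0.1, math.pi * 0.8, n_filters))
        self.sigma = nn.Parameter(torch.full((n_filters,), 10.0))

    def forward(self, x):                        # x: (batch, 1, time)
        t = torch.arange(self.kernel_size, device=x.device, dtype=x.dtype)
        t = t - (self.kernel_size - 1) / 2       # centered time grid
        env = torch.exp(-t ** 2 / (2 * self.sigma[:, None] ** 2))   # Gaussian envelope
        real = env * torch.cos(self.omega[:, None] * t)             # real part of Morlet
        imag = env * torch.sin(self.omega[:, None] * t)             # imaginary part
        kernels = torch.cat([real, imag], dim=0).unsqueeze(1)       # (2*n_filters, 1, k)
        return F.conv1d(x, kernels, padding=self.kernel_size // 2)

layer = MorletWaveletLayer()
features = layer(torch.randn(4, 1, 1024))        # (4, 32, 1024)
```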
arXiv Detail & Related papers (2022-05-06T16:41:27Z)
- Adaptive Frequency Learning in Two-branch Face Forgery Detection [66.91715092251258]
We propose to Adaptively learn Frequency information in the two-branch Detection framework, dubbed AFD.
We liberate our network from fixed frequency transforms and achieve better performance with data- and task-dependent transform layers.
arXiv Detail & Related papers (2022-03-27T14:25:52Z)
- iSTFTNet: Fast and Lightweight Mel-Spectrogram Vocoder Incorporating Inverse Short-Time Fourier Transform [38.271530231451834]
A mel-spectrogram vocoder must solve three inverse problems: recovery of the original-scale magnitude spectrogram, phase reconstruction, and frequency-to-time conversion.
A typical convolutional mel-spectrogram vocoder solves these problems jointly and implicitly using a convolutional neural network.
We propose iSTFTNet, which replaces some output-side layers of the mel-spectrogram vocoder with the inverse short-time Fourier transform.
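A hedged sketch of the output-side idea: rather than upsampling all the way to the waveform with convolutions, a small head predicts magnitude and phase spectrograms and the waveform is obtained with torch.istft. The channel counts, n_fft, and hop length below are illustrative assumptions, not iSTFTNet's configuration.

```python
import torch
import torch.nn as nn

class ISTFTHead(nn.Module):
    """Output head sketch: predict magnitude and phase, then invert with torch.istft.
    Channel counts, n_fft, and hop length are illustrative assumptions."""

    def __init__(self, in_channels=128, n_fft=64, hop_length=16):
        super().__init__()
        self.n_fft, self.hop_length = n_fft, hop_length
        n_bins = n_fft // 2 + 1
        self.to_mag = nn.Conv1d(in_channels, n_bins, kernel_size=7, padding=3)
        self.to_phase = nn.Conv1d(in_channels, n_bins, kernel_size=7, padding=3)
        self.register_buffer("window", torch.hann_window(n_fft))

    def forward(self, h):                              # h: (batch, in_channels, frames)
        mag = torch.exp(self.to_mag(h))                # positive magnitudes
        phase = self.to_phase(h)
        spec = torch.polar(mag, phase)                 # complex spectrogram (batch, bins, frames)
        return torch.istft(spec, n_fft=self.n_fft, hop_length=self.hop_length,
                           window=self.window)         # waveform (batch, samples)

head = ISTFTHead()
hidden = torch.randn(2, 128, 100)                      # upstream convolutional features
wave = head(hidden)                                    # (2, 1584) with these settings
```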
arXiv Detail & Related papers (2022-03-04T16:05:48Z)
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
- Modulated Periodic Activations for Generalizable Local Functional Representations [113.64179351957888]
We present a new representation that generalizes to multiple instances and achieves state-of-the-art fidelity.
Our approach produces general functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.
arXiv Detail & Related papers (2021-04-08T17:59:04Z)
- Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied to the generator network in response to instability issues during GAN training.
We force the generator to stay close to the departure-from-normality function of real samples, computed in the spectral domain of the Schur decomposition.
arXiv Detail & Related papers (2020-10-12T16:50:22Z)