Learning Non-linear Wavelet Transformation via Normalizing Flow
- URL: http://arxiv.org/abs/2101.11306v1
- Date: Wed, 27 Jan 2021 10:28:51 GMT
- Title: Learning Non-linear Wavelet Transformation via Normalizing Flow
- Authors: Shuo-Hui Li
- Abstract summary: An invertible transformation can be learned by a designed normalizing flow model.
With a factor-out scheme resembling the wavelet downsampling mechanism, one can train normalizing flow models to factor-out variables corresponding to fast patterns.
An analysis of the learned model in terms of low-pass/high-pass filters is given.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wavelet transformation stands as a cornerstone in modern data analysis and
signal processing. Its mathematical essence is an invertible transformation
that discerns slow patterns from fast patterns in the frequency domain, which
repeats at each level. Such an invertible transformation can be learned by a
designed normalizing flow model. With a factor-out scheme resembling the
wavelet downsampling mechanism, a mutually independent prior, and parameter
sharing along the depth of the network, one can train normalizing flow models
to factor-out variables corresponding to fast patterns at different levels,
thus extending linear wavelet transformations to non-linear learnable models.
In this paper, a concrete way of building such flows is given. The model's
ability is then demonstrated on lossless compression, progressive loading, and
super-resolution (upsampling) tasks. Lastly, an analysis of the
learned model in terms of low-pass/high-pass filters is given.
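As a rough sketch of the scheme described in the abstract (not the author's exact architecture: the Haar-style split, the pointwise coupling network, and all sizes below are illustrative assumptions, and PyTorch is assumed only for convenience), one level of such a flow could split a 1-D signal into slow/fast halves, apply a learnable coupling to the fast half, and factor it out under an independent Gaussian prior:

```python
import math
import torch
import torch.nn as nn

class FactorOutLevel(nn.Module):
    """One illustrative level: wavelet-style split + learnable coupling + factor-out."""

    def __init__(self, hidden=32):
        super().__init__()
        # Pointwise network mapping a "slow" coefficient to a (log-scale, shift)
        # for the co-located "fast" coefficient.  Because it acts per position,
        # the same module can be reused at every level of the flow
        # (parameter sharing along the depth).
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, x):
        # x: (batch, length) with even length.
        even, odd = x[:, 0::2], x[:, 1::2]
        # Orthogonal Haar-like split into "slow" averages and "fast" differences
        # (this linear part has unit Jacobian determinant).
        slow = (even + odd) / math.sqrt(2.0)
        fast = (even - odd) / math.sqrt(2.0)
        # Non-linear part: affine coupling of the fast coefficients conditioned
        # on the slow ones -- this is what makes the "wavelet" learnable.
        s, t = self.net(slow.unsqueeze(-1)).chunk(2, dim=-1)
        s, t = s.squeeze(-1), t.squeeze(-1)
        fast = fast * torch.exp(s) + t
        log_det = s.sum(dim=1)
        # The fast coefficients are factored out here and scored under a
        # mutually independent standard-normal prior; the slow coefficients go
        # on to the next level, where the same procedure repeats.
        log_prior = -0.5 * (fast ** 2).sum(dim=1) - 0.5 * fast.shape[1] * math.log(2 * math.pi)
        return slow, log_prior + log_det

# Toy usage: each level halves the length; stacking levels and summing the
# returned log-likelihood terms gives the model's log-density of the input.
level = FactorOutLevel()
x = torch.randn(4, 16)
slow, log_p = level(x)   # slow: (4, 8), log_p: (4,)
```

Inverting such a level (for sampling, progressive loading, or exact reconstruction) amounts to supplying or sampling the factored-out fast coefficients, undoing the affine coupling, and inverting the orthogonal split.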
Related papers
- Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? [69.4145579827826]
We show a fast rate of convergence on the regression loss despite the non-convexity of the optimization landscape.
This is the first theoretical analysis for multi-layer Transformer in this setting.
arXiv Detail & Related papers (2024-10-10T18:29:05Z)
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1)
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs)
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Wavelet Flow: Fast Training of High Resolution Normalizing Flows [27.661467862732792]
This paper introduces Wavelet Flow, a multi-scale, normalizing flow architecture based on wavelets.
A major advantage of Wavelet Flow is the ability to construct generative models for high resolution data that are impractical with previous models.
arXiv Detail & Related papers (2020-10-26T18:13:43Z)
- Haar Wavelet based Block Autoregressive Flows for Trajectories [129.37479472754083]
Prediction of trajectories such as that of pedestrians is crucial to the performance of autonomous agents.
We introduce a novel Haar wavelet based block autoregressive model leveraging split couplings.
We illustrate the advantages of our approach for generating diverse and accurate trajectories on two real-world datasets.
arXiv Detail & Related papers (2020-09-21T13:57:10Z)
- Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow [16.41460104376002]
We introduce subset flows, a class of flows that can transform finite volumes and allow exact computation of likelihoods for discrete data.
We identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows.
We demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
arXiv Detail & Related papers (2020-02-06T22:58:51Z)
- Invertible Generative Modeling using Linear Rational Splines [11.510009152620666]
Normalizing flows attempt to model an arbitrary probability distribution through a set of invertible mappings.
The first flow designs used coupling layer mappings built upon affine transformations.
Invertible piecewise functions as a replacement for affine transformations have attracted attention.
arXiv Detail & Related papers (2020-01-15T08:05:55Z)
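The entry above describes replacing the affine map inside a coupling layer with a monotone element-wise function. As a simplified illustration only (the cited paper uses linear rational splines; the piecewise-linear map, the `widths`/`heights` parameters, and the function name below are placeholder assumptions), such a monotone transform with a tractable log-derivative could look like this:

```python
import numpy as np

def piecewise_linear_map(x, widths, heights):
    """Monotone piecewise-linear map of [0, 1] onto [0, 1].

    widths, heights: positive 1-D arrays that each sum to 1, giving the bin
    sizes on the input and output axes.  Every segment has strictly positive
    slope, so the map is invertible and its log-derivative is simple.
    """
    knots_x = np.concatenate(([0.0], np.cumsum(widths)))
    knots_y = np.concatenate(([0.0], np.cumsum(heights)))
    idx = np.clip(np.searchsorted(knots_x, x, side="right") - 1, 0, len(widths) - 1)
    slope = heights[idx] / widths[idx]
    y = knots_y[idx] + slope * (x - knots_x[idx])
    return y, np.log(slope)  # log|dy/dx|; summing over dimensions gives the coupling's log-det

# Toy check: three bins, identity-like but warped in the middle.
w = np.array([0.3, 0.4, 0.3]); h = np.array([0.2, 0.6, 0.2])
y, logdet = piecewise_linear_map(np.array([0.1, 0.5, 0.9]), w, h)
```

In a coupling layer, `widths` and `heights` would be produced by a neural network from the untransformed half of the variables, exactly as the scale and shift are in the affine case.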