Learning Non-linear Wavelet Transformation via Normalizing Flow
- URL: http://arxiv.org/abs/2101.11306v1
- Date: Wed, 27 Jan 2021 10:28:51 GMT
- Title: Learning Non-linear Wavelet Transformation via Normalizing Flow
- Authors: Shuo-Hui Li
- Abstract summary: An invertible transformation can be learned by a designed normalizing flow model.
With a factor-out scheme resembling the wavelet downsampling mechanism, one can train normalizing flow models to factor-out variables corresponding to fast patterns.
An analysis of the learned model in terms of low-pass/high-pass filters is given.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wavelet transformation stands as a cornerstone in modern data analysis and
signal processing. Its mathematical essence is an invertible transformation
that discerns slow patterns from fast patterns in the frequency domain, which
repeats at each level. Such an invertible transformation can be learned by a
designed normalizing flow model. With a factor-out scheme resembling the
wavelet downsampling mechanism, a mutually independent prior, and parameter
sharing along the depth of the network, one can train normalizing flow models
to factor-out variables corresponding to fast patterns at different levels,
thus extending linear wavelet transformations to non-linear learnable models.
In this paper, a concrete way of building such flows is given. The model's
abilities are then demonstrated on lossless compression, progressive loading,
and super-resolution (upsampling) tasks. Lastly, an analysis of the learned
model in terms of low-pass/high-pass filters is given.
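The factor-out scheme the abstract describes generalizes the classical Haar wavelet step, in which each level splits a signal into a coarse low-pass part (kept for the next level) and a high-pass detail part that is factored out. As a point of reference for the linear baseline the paper extends, here is a minimal NumPy sketch of that level-wise split and its exact inverse (function names are illustrative, not from the paper):

```python
import numpy as np

def haar_step(x):
    """One Haar level: split an even-length signal into slow and fast parts."""
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # slow patterns, kept for next level
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # fast patterns, factored out
    return low, high

def haar_step_inverse(low, high):
    """Exact inverse: recover the interleaved even/odd samples."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

# Multi-level transform: repeat the split on the low-pass part,
# mirroring the level-wise factor-out scheme of the flow model.
signal = np.random.default_rng(0).normal(size=16)
low, details = signal, []
for _ in range(3):
    low, high = haar_step(low)
    details.append(high)

# The transform is invertible, so reconstruction is lossless.
rec = low
for high in reversed(details):
    rec = haar_step_inverse(rec, high)
print(np.allclose(rec, signal))  # True
```

The paper's contribution is to replace these fixed linear filters with learned invertible (flow) transformations while keeping the same level-wise factor-out structure.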
Related papers
- Boosting Fast and High-Quality Speech Synthesis with Linear Diffusion [85.54515118077825]
This paper proposes a linear diffusion model (LinDiff) based on an ordinary differential equation to simultaneously reach fast inference and high sample quality.
To reduce computational complexity, LinDiff employs a patch-based processing approach that partitions the input signal into small patches.
Our model can synthesize speech of a quality comparable to that of autoregressive models with faster synthesis speed.
arXiv Detail & Related papers (2023-06-09T07:02:43Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1)
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs)
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Explainable nonlinear modelling of multiple time series with invertible neural networks [7.605814048051735]
A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series are generated in two steps.
The latter mappings are assumed invertible, and are modelled as shallow neural networks, so that their inverse can be numerically evaluated.
This paper explains the steps needed to calculate the gradients applying implicit differentiation.
arXiv Detail & Related papers (2021-07-01T12:07:09Z)
- Wavelet Flow: Fast Training of High Resolution Normalizing Flows [27.661467862732792]
This paper introduces Wavelet Flow, a multi-scale, normalizing flow architecture based on wavelets.
A major advantage of Wavelet Flow is the ability to construct generative models for high resolution data that are impractical with previous models.
arXiv Detail & Related papers (2020-10-26T18:13:43Z)
- Haar Wavelet based Block Autoregressive Flows for Trajectories [129.37479472754083]
Prediction of trajectories such as that of pedestrians is crucial to the performance of autonomous agents.
We introduce a novel Haar wavelet based block autoregressive model leveraging split couplings.
We illustrate the advantages of our approach for generating diverse and accurate trajectories on two real-world datasets.
arXiv Detail & Related papers (2020-09-21T13:57:10Z)
- SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows [78.77808270452974]
SurVAE Flows is a modular framework for composable transformations that encompasses VAEs and normalizing flows.
We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows.
arXiv Detail & Related papers (2020-07-06T13:13:22Z)
- Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow [16.41460104376002]
We introduce subset flows, a class of flows that can transform finite volumes and allow exact computation of likelihoods for discrete data.
We identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows.
We demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
arXiv Detail & Related papers (2020-02-06T22:58:51Z)
- Invertible Generative Modeling using Linear Rational Splines [11.510009152620666]
Normalizing flows attempt to model an arbitrary probability distribution through a set of invertible mappings.
The first flow designs used coupling layer mappings built upon affine transformations.
Intricate piecewise functions as a replacement for affine transformations have attracted attention.
arXiv Detail & Related papers (2020-01-15T08:05:55Z)
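The last summary mentions the coupling-layer mappings built upon affine transformations that early flow designs used. As a brief illustration of that structure (a generic sketch, with toy conditioner functions standing in for the learned networks), an affine coupling layer transforms one half of the input conditioned on the other half, which makes the mapping trivially invertible:

```python
import numpy as np

# Toy "conditioner networks": any functions of x1 work, since
# invertibility relies only on the coupling structure.
def s(x1): return 0.5 * np.tanh(x1)   # log-scale
def t(x1): return np.sin(x1)          # shift

def coupling_forward(x):
    """Affine coupling: rescale and shift x2 conditioned on x1."""
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, y2])

def coupling_inverse(y):
    """Exact inverse: x1 passes through unchanged, so s and t can be re-evaluated."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

x = np.random.default_rng(1).normal(size=8)
print(np.allclose(coupling_inverse(coupling_forward(x)), x))  # True
```

Spline-based flows such as the linear rational splines above replace the affine map of x2 with a more expressive piecewise-invertible function while keeping this same coupling structure.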
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed summaries (including all information) and is not responsible for any consequences.