Parallel Backpropagation for Inverse of a Convolution with Application to Normalizing Flows
- URL: http://arxiv.org/abs/2410.14634v1
- Date: Fri, 18 Oct 2024 17:35:33 GMT
- Title: Parallel Backpropagation for Inverse of a Convolution with Application to Normalizing Flows
- Authors: Sandeep Nagar, Girish Varma
- Abstract summary: The inverse of an invertible convolution is an important operation that comes up in Normalizing Flows.
We give a fast parallel backpropagation algorithm with running time $O(\sqrt{n})$ for a square image.
We show significantly improved sampling times with similar bits per dimension compared to previous models.
- Score: 2.048226951354646
- Abstract: The inverse of an invertible convolution is an important operation that comes up in Normalizing Flows, Image Deblurring, etc. The naive algorithm for backpropagation of this operation using Gaussian elimination has running time $O(n^3)$, where $n$ is the number of pixels in the image. We give a fast parallel backpropagation algorithm with running time $O(\sqrt{n})$ for a square image and provide a GPU implementation of it. Inverse Convolutions are usually used in Normalizing Flows in the sampling pass, making sampling slow. We propose to use Inverse Convolutions in the forward (image to latent vector) pass of the Normalizing Flow. Since the sampling pass is the inverse of the forward pass, it then uses convolutions only, resulting in efficient sampling times. We use our parallel backpropagation algorithm to optimize the inverse convolution layer, which also yields fast training times. We implement this approach in various Normalizing Flow backbones, resulting in our Inverse-Flow models. We benchmark Inverse-Flow on standard datasets and show significantly improved sampling times with similar bits per dimension compared to previous models.
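The abstract's $O(\sqrt{n})$ claim rests on a structural fact worth making concrete: if the convolution is causal (masked so that output pixel $(i, j)$ depends only on input pixels above and to its left, as in the authors' earlier FInC Flow work listed below), then during inversion every pixel on an anti-diagonal $i + j = d$ can be recovered simultaneously, and a square $\sqrt{n} \times \sqrt{n}$ image has only $2\sqrt{n} - 1$ anti-diagonals. A minimal single-channel NumPy sketch of this diagonal sweep follows; the function name, mask convention, and single-channel setup are illustrative assumptions, not the paper's CUDA implementation.

```python
import numpy as np

def invert_causal_conv2d(y, w):
    """Recover x from a causal k x k convolution y = conv(x, w).

    Hypothetical single-channel sketch, not the paper's GPU kernel.
    Assumes the "current pixel" tap is w[k-1, k-1], so output (i, j)
    depends only on inputs (i', j') with i' <= i and j' <= j, and that
    w[k-1, k-1] != 0. Pixels on one anti-diagonal i + j = d are then
    mutually independent: the inner loop over i is parallelizable, and
    only H + W - 1 = O(sqrt(n)) sequential sweeps are needed.
    """
    k = w.shape[0]
    H, W = y.shape
    x = np.zeros((H, W))
    p = k - 1  # offset of the current-pixel tap
    for d in range(H + W - 1):                             # sequential sweeps
        for i in range(max(0, d - W + 1), min(H, d + 1)):  # parallel on GPU
            j = d - i
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    ii, jj = i - p + a, j - p + b
                    if (a, b) != (p, p) and 0 <= ii < H and 0 <= jj < W:
                        acc += w[a, b] * x[ii, jj]  # earlier diagonals only
            x[i, j] = (y[i, j] - acc) / w[p, p]
    return x
```

The inner loop over $i$ is where a GPU implementation would parallelize; the backward pass admits the same anti-diagonal schedule, which is the kind of structure the quoted $O(\sqrt{n})$ backpropagation bound exploits.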
Related papers
- Fast Samplers for Inverse Problems in Iterative Refinement Models [19.099632445326826]
We propose a plug-and-play framework for constructing efficient samplers for inverse problems.
Our method can generate high-quality samples in as few as 5 conditional sampling steps and outperforms competing baselines requiring 20-1000 steps.
arXiv Detail & Related papers (2024-05-27T21:50:16Z)
- Accelerating Parallel Sampling of Diffusion Models [25.347710690711562]
We propose a novel approach that accelerates the sampling of diffusion models by parallelizing the autoregressive process.
Applying these techniques, we introduce ParaTAA, a universal and training-free parallel sampling algorithm.
Our experiments demonstrate that ParaTAA can reduce the number of inference steps required by common sequential sampling algorithms by a factor of 4 to 14.
arXiv Detail & Related papers (2024-02-15T14:27:58Z)
- FInC Flow: Fast and Invertible $k \times k$ Convolutions for Normalizing Flows [2.156373334386171]
Invertible convolutions have been an essential element for building expressive normalizing flow-based generative models.
We propose a $k \times k$ convolutional layer and a Deep Normalizing Flow architecture.
arXiv Detail & Related papers (2023-01-23T04:31:03Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Stochastic Optimization with Laggard Data Pipelines [65.20044914532221]
We show that "data-echoed" extensions of common optimization methods exhibit provable improvements over their synchronous counterparts.
Specifically, we show that in convex optimization with minibatches, data echoing affords speedups on the curvature-dominated part of the convergence rate, while maintaining the optimal statistical rate.
arXiv Detail & Related papers (2020-10-26T14:55:31Z)
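A toy sketch of the data-echoing idea summarized in the entry above, assuming the standard formulation in which each fetched minibatch is reused for several optimizer steps while the input pipeline catches up; the function names and the echo factor are illustrative, not taken from the paper.

```python
def sgd_with_data_echoing(grad_fn, batches, w, lr=0.1, echo=4):
    """Plain SGD in which each fetched minibatch is reused `echo` times.

    When the data pipeline is the bottleneck, repeating optimizer steps
    on the batch already in memory trades gradient freshness for
    wall-clock throughput; the cited paper analyzes when this preserves
    the optimal statistical rate. `grad_fn(w, batch)` returns the
    minibatch gradient. Illustrative sketch, not the paper's algorithm
    verbatim.
    """
    for batch in batches:         # expensive fetch happens once per batch
        for _ in range(echo):     # cheap repeated steps on cached data
            w = w - lr * grad_fn(w, batch)
    return w
```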
- Accelerated FBP for computed tomography image reconstruction [1.0266928164137636]
Filtered back projection (FBP) is a commonly used technique in tomographic image reconstruction demonstrating acceptable quality.
We propose a novel approach that reduces the computational complexity of the algorithm to $\Theta(N^2 \log N)$ addition operations, avoiding Fourier space.
arXiv Detail & Related papers (2020-07-13T10:16:54Z)
- Adversarial Optimal Transport Through The Convolution Of Kernels With Evolving Measures [3.1735221946062313]
A novel algorithm is proposed to solve the sample-based optimal transport problem.
The representation of the test function as the Monte Carlo simulation of a distribution makes the algorithm robust to dimensionality.
arXiv Detail & Related papers (2020-06-07T19:42:50Z)
- Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
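The fixed-point framing in this last entry is simple enough to sketch. Below is a minimal NumPy version of the Jacobi variant, assuming every intermediate state shares the shape of the input; the function and variable names are illustrative, not the authors' released code.

```python
import numpy as np

def jacobi_feedforward(fs, x0, max_sweeps=None):
    """Solve x[t] = fs[t-1](x[t-1]) for t = 1..T by Jacobi iteration.

    Every unknown is refreshed simultaneously from the previous
    iterate, so one sweep is fully parallel across t. Correct values
    propagate forward by at least one step per sweep, so at most T
    sweeps reproduce the sequential answer exactly, and convergence is
    often much faster. Illustrative sketch assuming all states share
    x0's shape; not the authors' released implementation.
    """
    T = len(fs)
    xs = [np.array(x0)] + [np.zeros_like(x0) for _ in range(T)]
    for _ in range(max_sweeps or T):
        new = [fs[t](xs[t]) for t in range(T)]   # parallelizable sweep
        if all(np.array_equal(new[t], xs[t + 1]) for t in range(T)):
            break                                # reached the fixed point
        xs[1:] = new
    return xs[-1]

# Example: matches sequential evaluation of x[t] = tanh(x[t-1] + t)
fs = [lambda x, t=t: np.tanh(x + t) for t in range(1, 6)]
print(jacobi_feedforward(fs, np.zeros(3)))
```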
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.