Using Intermediate Forward Iterates for Intermediate Generator Optimization
- URL: http://arxiv.org/abs/2302.02336v1
- Date: Sun, 5 Feb 2023 08:46:15 GMT
- Title: Using Intermediate Forward Iterates for Intermediate Generator Optimization
- Authors: Harsh Mishra, Jurijs Nazarovs, Manmohan Dogra, Sathya N. Ravi
- Abstract summary: Intermediate Generator Optimization (IGO) can be incorporated into any standard autoencoder pipeline for the generative task.
We show applications of IGO on two dense predictive tasks, viz. image extrapolation and point cloud denoising.
- Score: 14.987013151525368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Score-based models have recently been introduced as a richer framework to
model distributions in high dimensions and are generally more suitable for
generative tasks. In score-based models, a generative task is formulated using
a parametric model (such as a neural network) to directly learn the gradient of
such high dimensional distributions, instead of the density functions
themselves, as is done traditionally. From the mathematical point of view, such
gradient information can be utilized in reverse by stochastic sampling to
generate diverse samples. However, from a computational perspective, existing
score-based models can be efficiently trained only if the forward or the
corruption process can be computed in closed form. By using the relationship
between the process and layers in a feed-forward network, we derive a
backpropagation-based procedure, which we call Intermediate Generator
Optimization (IGO), to utilize intermediate iterates of the process with negligible
computational overhead. The main advantage of IGO is that it can be
incorporated into any standard autoencoder pipeline for the generative task. We
analyze the sample complexity properties of IGO to solve downstream tasks like
Generative PCA. We show applications of IGO on two dense predictive tasks,
viz. image extrapolation and point cloud denoising. Our experiments indicate
that obtaining an ensemble of generators for various time points is possible
using first-order methods.
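To make the setting concrete, the sketch below illustrates the standard score-based pipeline the abstract refers to: a corruption process q(x_t | x_0) available in closed form, a network trained by denoising score matching to approximate the score grad_x log p_t(x), and stochastic sampling that runs the learned score in reverse. This is a generic illustration, not the authors' IGO procedure; the architecture, noise schedule, and sampler are assumptions made for brevity.

```python
# Minimal sketch of a generic score-based pipeline (NOT the paper's IGO method).
# Assumptions: a toy MLP score network, a VE-style geometric noise schedule,
# and a simple annealed Langevin-style sampler.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Tiny MLP s_theta(x, t) approximating the score grad_x log p_t(x)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def forward_corrupt(x0, t, sigma_min=0.01, sigma_max=1.0):
    """Closed-form corruption: x_t = x_0 + sigma(t) * eps (VE-style schedule)."""
    sigma = sigma_min * (sigma_max / sigma_min) ** t  # geometric noise schedule
    eps = torch.randn_like(x0)
    return x0 + sigma[:, None] * eps, eps, sigma

def dsm_loss(model, x0):
    """Denoising score matching: the score of q(x_t | x_0) is -eps / sigma(t)."""
    t = torch.rand(x0.shape[0])
    xt, eps, sigma = forward_corrupt(x0, t)
    score = model(xt, t)
    target = -eps / sigma[:, None]
    # sigma^2 weighting keeps the loss scale comparable across noise levels
    return ((sigma[:, None] * (score - target)) ** 2).mean()

@torch.no_grad()
def langevin_sample(model, n, dim, steps=200, step_size=1e-3):
    """Use the learned score in reverse: annealed Langevin-style sampling."""
    x = torch.randn(n, dim)
    for i in range(steps, 0, -1):
        t = torch.full((n,), i / steps)
        x = x + step_size * model(x, t) + (2 * step_size) ** 0.5 * torch.randn_like(x)
    return x
```

Per the abstract, IGO additionally reuses the intermediate iterates x_t of such a forward process inside a standard autoencoder pipeline, so that first-order training yields a generator per time point; the exact coupling to the network's layers is specified in the paper rather than in this sketch.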
Related papers
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use a score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
arXiv Detail & Related papers (2023-07-19T10:26:29Z) - Neural Inverse Transform Sampler [4.061135251278187]
We introduce the Neural Inverse Transform Sampler (NITS), a novel deep learning framework for modeling and sampling from general, multidimensional, compactly-supported probability densities.
We show that when modeling conditional densities with a neural network, $Z$ can be exactly and efficiently computed.
arXiv Detail & Related papers (2022-06-22T15:28:29Z)
- Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanic behind score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that a predictor-corrector gives better convergence than using either portion alone.
arXiv Detail & Related papers (2022-06-13T14:57:35Z)
- Forward Operator Estimation in Generative Models with Kernel Transfer Operators [37.999297683250575]
We show that our formulation enables highly efficient distribution approximation and sampling, and offers surprisingly good empirical performance.
We also show that the algorithm performs well in small sample size settings (in brain imaging).
arXiv Detail & Related papers (2021-12-01T06:54:31Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose Gaussian Mixture Replay (GMR), a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Score-Based Generative Modeling through Stochastic Differential Equations [114.39209003111723]
We present a stochastic differential equation (SDE) that transforms a complex data distribution to a known prior distribution by slowly injecting noise.
A corresponding reverse-time SDE transforms the prior distribution back into the data distribution by slowly removing the noise.
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks.
We demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
arXiv Detail & Related papers (2020-11-26T19:39:10Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.