$\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified
States
- URL: http://arxiv.org/abs/2303.18242v2
- Date: Fri, 1 Mar 2024 16:28:36 GMT
- Title: $\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified
States
- Authors: Sam Bond-Taylor, Chris G. Willcocks
- Abstract summary: $\infty$-Diff is a generative diffusion model defined in an infinite-dimensional Hilbert space.
By training on randomly sampled subsets of coordinates, we learn a continuous function for arbitrary resolution sampling.
- Score: 13.75813166759549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces $\infty$-Diff, a generative diffusion model defined in
an infinite-dimensional Hilbert space, which can model infinite resolution
data. By training on randomly sampled subsets of coordinates and denoising
content only at those locations, we learn a continuous function for arbitrary
resolution sampling. Unlike prior neural field-based infinite-dimensional
models, which use point-wise functions requiring latent compression, our method
employs non-local integral operators to map between Hilbert spaces, allowing
spatial context aggregation. This is achieved with an efficient multi-scale
function-space architecture that operates directly on raw sparse coordinates,
coupled with a mollified diffusion process that smooths out irregularities.
Through experiments on high-resolution datasets, we found that even at an
$8\times$ subsampling rate, our model retains high-quality diffusion. This
leads to significant run-time and memory savings, delivers samples with lower
FID scores, and scales beyond the training resolution while retaining detail.
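The core training trick — denoising only at a random subset of coordinates, after smoothing (mollifying) the state — can be sketched in a few lines. This is an illustrative numpy toy, not the authors' implementation: `mollify` is a crude separable Gaussian blur standing in for the mollification operator, and the 8x rate matches the subsampling rate quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_coordinates(height, width, rate):
    """Pick a random subset of pixel coordinates (1/rate of the total),
    mimicking the subsampled training of infinity-Diff."""
    n_total = height * width
    idx = rng.choice(n_total, size=n_total // rate, replace=False)
    return np.stack([idx // width, idx % width], axis=1)

def mollify(image, sigma=1.0):
    """Cheap separable Gaussian-like blur standing in for the mollifier
    (convolution with a smooth kernel)."""
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

# Data preparation for one training step at an 8x subsampling rate:
img = rng.standard_normal((64, 64))
smooth = mollify(img)                       # mollified state
coords = subsample_coordinates(64, 64, rate=8)
# Noise and later denoise only at the sampled coordinates:
noisy_values = smooth[coords[:, 0], coords[:, 1]] + 0.5 * rng.standard_normal(len(coords))
```

Because the model only ever sees (coordinate, value) pairs, the same network can be queried on a denser coordinate grid at sampling time, which is what enables arbitrary-resolution generation.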
Related papers
- DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for
Accelerated Seq2Seq Diffusion Models [58.450152413700586]
We introduce a soft absorbing state that facilitates the diffusion model in learning to reconstruct discrete mutations based on the underlying Gaussian space.
We employ state-of-the-art ODE solvers within the continuous space to expedite the sampling process.
Our proposed method effectively accelerates the training convergence by 4x and generates samples of similar quality 800x faster.
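Sampling speed-ups of this kind typically come from integrating a deterministic probability-flow ODE instead of the stochastic reverse process. The sketch below uses a plain Euler integrator and a closed-form toy score (the score of a Gaussian), not DiffuSeq-v2's actual solver or model:

```python
import numpy as np

def score(x, t):
    # Toy stand-in for a learned score network: score of N(0, 1 + t).
    return -x / (1.0 + t)

def probability_flow_euler(x, t_start=1.0, t_end=0.0, n_steps=10):
    """Euler integration of a probability-flow ODE dx/dt = -0.5 * score(x, t),
    run backwards in time (a generic continuous-space sampler sketch)."""
    ts = np.linspace(t_start, t_end, n_steps + 1)
    for i in range(n_steps):
        dt = ts[i + 1] - ts[i]              # negative: integrating t -> 0
        x = x + (-0.5 * score(x, ts[i])) * dt
    return x

x0 = probability_flow_euler(np.array([1.0]))
```

For this toy score the exact flow maps x(1) = 1 to x(0) = 1/sqrt(2) ≈ 0.707, so even 10 Euler steps land close; higher-order ODE solvers get there in fewer steps, which is the source of the claimed acceleration.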
arXiv Detail & Related papers (2023-10-09T15:29:10Z)
- Samplet basis pursuit: Multiresolution scattered data approximation with sparsity constraints [0.0]
We consider scattered data approximation in samplet coordinates with $\ell_1$-regularization.
By using the Riesz isometry, we embed samplets into reproducing kernel Hilbert spaces.
We argue that the class of signals that are sparse with respect to the embedded samplet basis is considerably larger than the class of signals that are sparse with respect to the basis of kernel translates.
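An $\ell_1$-regularized fit of this kind ("basis pursuit denoising") is commonly solved by iterative soft-thresholding. The generic ISTA sketch below solves min_x 0.5||Ax - b||^2 + lam||x||_1 for an arbitrary dictionary A; the paper works in samplet coordinates, which this toy does not construct:

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative soft-thresholding (ISTA) for the l1-regularized least-squares
    problem behind basis pursuit denoising. Generic sketch, not samplet-specific."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L       # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# With A = I the solution is exact soft-thresholding of b:
x_hat = ista(np.eye(5), np.array([1.0, 0.0, 0.05, 0.0, -1.0]), lam=0.1)
```

The soft-threshold step is what produces sparse coefficient vectors, so a basis in which the signal class is sparse (the paper's argument for samplets) directly translates into fewer nonzeros for the same fit quality.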
arXiv Detail & Related papers (2023-06-16T21:20:49Z)
- Random Smoothing Regularization in Kernel Gradient Descent Learning [24.383121157277007]
We present a framework for random smoothing regularization that can adaptively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
Our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality.
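A minimal version of random smoothing regularization is to perturb the training inputs with Gaussian noise before fitting a kernel estimator. The sketch below does this for kernel ridge regression with an RBF kernel; it is illustrative only and is not the estimator analysed in the paper (which studies kernel gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel between two 1-D sample arrays."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def kernel_ridge_with_smoothing(x, y, noise_sd=0.1, n_copies=5, lam=1e-2):
    """Augment inputs with Gaussian perturbations, then solve kernel ridge
    regression on the enlarged set -- a simple random-smoothing regularizer."""
    xs = np.concatenate([x + noise_sd * rng.standard_normal(x.shape)
                         for _ in range(n_copies)])
    ys = np.tile(y, n_copies)
    K = rbf_kernel(xs, xs)
    alpha = np.linalg.solve(K + lam * np.eye(len(xs)), ys)
    return lambda q: rbf_kernel(q, xs) @ alpha   # predictor on new points

x = np.linspace(0.0, 1.0, 20)
f = kernel_ridge_with_smoothing(x, np.sin(2 * np.pi * x))
pred = f(np.array([0.0, 0.5]))
```

Averaging over perturbed copies of each input effectively convolves the target with a Gaussian, which is the smoothing effect the paper exploits to adapt to Sobolev-type regularity.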
arXiv Detail & Related papers (2023-05-05T13:37:34Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
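The defining mechanism — keeping the process on the data's support by reflecting at the boundary — can be illustrated with a reflected Euler-Maruyama step on [0, 1]. This is a generic reflected-SDE sketch with a hand-picked drift, not the paper's learned reverse process:

```python
import numpy as np

rng = np.random.default_rng(0)

def reflect(x, lo=0.0, hi=1.0):
    """Mirror values back into [lo, hi] (reflecting boundary)."""
    width = hi - lo
    y = np.mod(x - lo, 2 * width)
    return lo + np.where(y > width, 2 * width - y, y)

def reflected_em_step(x, drift, dt, noise_scale=1.0):
    """One Euler-Maruyama step followed by reflection into the support,
    so the simulated state never leaves [0, 1]."""
    x_new = x + drift(x) * dt + noise_scale * np.sqrt(dt) * rng.standard_normal(x.shape)
    return reflect(x_new)

x = np.full(4, 0.5)
for _ in range(100):
    x = reflected_em_step(x, drift=lambda v: -v, dt=0.01)
```

Because samples are confined to the support by construction, no thresholding or clipping tricks are needed at generation time — the property the paper highlights.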
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
- Super-resolution GANs of randomly-seeded fields [68.8204255655161]
We propose a novel super-resolution generative adversarial network (GAN) framework to estimate field quantities from random sparse sensors.
The algorithm exploits random sampling to provide incomplete views of the high-resolution underlying distributions.
The proposed technique is tested on synthetic databases of fluid flow simulations, ocean surface temperature distributions measurements, and particle image velocimetry data.
arXiv Detail & Related papers (2022-02-23T18:57:53Z)
- Multi-fidelity data fusion for the approximation of scalar functions with low intrinsic dimensionality through active subspaces [0.0]
We present a multi-fidelity approach involving active subspaces and test it on two different high-dimensional benchmarks.
arXiv Detail & Related papers (2020-10-16T12:35:49Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
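The compute-saving pattern — evaluate an expensive operation only at stochastically sampled locations, then densely reconstruct — can be shown with a toy numpy sketch. Here the "feature" is a pointwise `tanh` and the reconstruction is nearest-neighbour fill; the paper uses learned features and a more sophisticated interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_then_interpolate(image, keep_frac=0.25, feature=np.tanh):
    """Evaluate `feature` only at a random subset of pixels, then fill the
    remaining pixels from their nearest sampled neighbour -- a toy version
    of stochastic feature sampling + dense reconstruction."""
    h, w = image.shape
    mask = rng.random((h, w)) < keep_frac
    ys, xs = np.nonzero(mask)
    values = feature(image[ys, xs])          # expensive op at sparse sites only
    # Nearest-neighbour fill: each pixel copies the closest sampled value.
    gy, gx = np.mgrid[0:h, 0:w]
    d2 = (gy.reshape(-1, 1) - ys) ** 2 + (gx.reshape(-1, 1) - xs) ** 2
    nearest = np.argmin(d2, axis=1)
    return values[nearest].reshape(h, w)

dense = sparse_then_interpolate(rng.standard_normal((16, 16)))
```

With `keep_frac=0.25`, the expensive feature runs on roughly a quarter of the pixels, which is where the computation savings come from; accuracy then hinges on how well the interpolation recovers the skipped locations.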
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.