Continuous-Time Functional Diffusion Processes
- URL: http://arxiv.org/abs/2303.00800v3
- Date: Mon, 18 Dec 2023 11:24:24 GMT
- Title: Continuous-Time Functional Diffusion Processes
- Authors: Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen,
Maurizio Filippone, Pietro Michiardi
- Abstract summary: We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
FDPs require a new framework to describe the forward and backward dynamics, and several extensions to derive practical training objectives.
- Score: 24.31376730733132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Functional Diffusion Processes (FDPs), which generalize
score-based diffusion models to infinite-dimensional function spaces. FDPs
require a new mathematical framework to describe the forward and backward
dynamics, and several extensions to derive practical training objectives. These
include infinite-dimensional versions of the Girsanov theorem, needed to
compute an ELBO, and of the sampling theorem, needed to guarantee that
functional evaluations at a countable set of points are equivalent to
infinite-dimensional functions. We use FDPs to build a new breed of generative
models in function spaces, which do not require specialized network
architectures and can work with any kind of continuous data. Our results
on real data show that FDPs achieve high-quality image generation, using a
simple MLP architecture with orders of magnitude fewer parameters than existing
diffusion models.
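To make the mechanics concrete, here is a minimal, hedged sketch (not the authors' code; layer sizes, the noise schedule, and all names are assumptions) of the finite-dimensional view the abstract describes: a function is observed at a countable set of points, a VP-style forward process perturbs those evaluations, and a small MLP scores (coordinate, value, time) triples.

```python
# Illustrative sketch of function-space score matching on pointwise
# evaluations. Shapes, schedule, and architecture are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, coords, values, t):
    """Tiny MLP score model: input (x, f_t(x), t) -> estimated score."""
    h = np.concatenate([coords, values, np.full_like(values, t)], axis=-1)
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b          # one score value per evaluation point

def init_params(sizes):
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# A "function" observed on a countable set of points (cf. the sampling theorem).
coords = rng.uniform(0, 1, (128, 1))
f0 = np.sin(2 * np.pi * coords)                          # clean function values

t = 0.5
alpha, sigma = np.exp(-t), np.sqrt(1 - np.exp(-2 * t))   # VP-style schedule
noise = rng.normal(size=f0.shape)
ft = alpha * f0 + sigma * noise                          # forward-perturbed values

params = init_params([3, 64, 64, 1])
score = mlp(params, coords, ft, t)
# Denoising score-matching target: -noise / sigma.
dsm_loss = np.mean((score + noise / sigma) ** 2)
print("DSM loss:", dsm_loss)
```

In the paper's infinite-dimensional setting, the Girsanov and sampling theorem extensions are what justify training on such pointwise evaluations; the sketch above only illustrates the finite-dimensional computation.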
Related papers
- Extension of Symmetrized Neural Network Operators with Fractional and Mixed Activation Functions [0.0]
We propose a novel extension to symmetrized neural network operators by incorporating fractional and mixed activation functions.
Our framework introduces a fractional exponent in the activation functions, allowing adaptive non-linear approximations with improved accuracy.
arXiv Detail & Related papers (2025-01-17T14:24:25Z)
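As a toy illustration of the fractional-activation idea in the entry above (the paper's operator construction is more involved; the function below and its exponent are assumptions, not the authors' definition):

```python
import numpy as np

def fractional_relu(x, alpha=0.5):
    """A ReLU raised to a fractional exponent: alpha < 1 damps large
    activations, giving an adaptive non-linearity of the kind described."""
    return np.maximum(x, 0.0) ** alpha

x = np.linspace(-2.0, 2.0, 9)
print(fractional_relu(x, alpha=0.5))   # zero for x <= 0, sqrt(x) for x > 0
```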
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
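One plausible reading of the ProdLayer mentioned in the DimOL entry above, sketched under assumptions (the paper defines the exact form): augment a linear layer with an elementwise product of two projections, so the layer can express terms whose physical dimensions multiply.

```python
import numpy as np

rng = np.random.default_rng(0)

def prod_layer(x, Wa, Wb, Wc):
    # Linear part plus a product part that captures multiplied quantities.
    return x @ Wc + (x @ Wa) * (x @ Wb)

x = rng.normal(size=(4, 8))                # batch of 4, 8 channels
Wa, Wb, Wc = (rng.normal(0, 0.1, (8, 8)) for _ in range(3))
print(prod_layer(x, Wa, Wb, Wc).shape)     # (4, 8)
```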
- Functional Flow Matching [14.583771853250008]
We propose a function-space generative model that generalizes the recently-introduced Flow Matching model.
Our method does not rely on likelihoods or simulations, making it well-suited to the function space setting.
We demonstrate through experiments on several real-world benchmarks that FFM outperforms several recently proposed function-space generative models.
arXiv Detail & Related papers (2023-05-26T19:07:47Z)
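For context on the FFM entry above, this is the standard finite-dimensional Flow Matching objective that FFM lifts to function space (a hedged sketch; the function-space construction in the paper is measure-theoretic, and the model here is a stand-in linear map):

```python
# Interpolate x_t = (1 - t) x0 + t x1 and regress a velocity field
# onto the conditional target x1 - x0. No likelihoods or simulation.
import numpy as np

rng = np.random.default_rng(0)

def velocity_model(x, t, W):                 # stand-in for a neural network
    return np.concatenate([x, np.full((len(x), 1), t)], axis=1) @ W

x0 = rng.normal(size=(64, 2))                # noise samples
x1 = rng.normal(loc=3.0, size=(64, 2))       # data samples
t = rng.uniform()
xt = (1 - t) * x0 + t * x1
target = x1 - x0                             # conditional velocity

W = rng.normal(0, 0.1, (3, 2))
loss = np.mean((velocity_model(xt, t, W) - target) ** 2)
print("FM loss:", loss)
```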
- Score-based Diffusion Models in Function Space [137.70916238028306]
Diffusion models have recently emerged as a powerful framework for generative modeling.
This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
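The resolution-independence claim in the DDO entry above can be illustrated as follows (a sketch under assumptions, not the DDO algorithm): a score model that acts pointwise on (coordinate, value, time) uses the same weights at every discretization, so per-point cost does not grow with resolution.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (3, 1))

def pointwise_score(coords, values, t):
    inp = np.concatenate([coords, values, np.full_like(values, t)], axis=1)
    return inp @ W                        # same weights at every resolution

for n in (32, 256, 4096):                 # coarse to fine discretizations
    x = np.linspace(0, 1, n)[:, None]
    f = rng.normal(size=(n, 1))
    print(n, pointwise_score(x, f, t=0.3).shape)
```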
- Learning PSD-valued functions using kernel sums-of-squares [94.96262888797257]
We introduce a kernel sum-of-squares model for functions that take values in the PSD cone.
We show that it constitutes a universal approximator of PSD functions, and derive eigenvalue bounds in the case of subsampled equality constraints.
We then apply our results to modeling convex functions, by enforcing a kernel sum-of-squares representation of their Hessian.
arXiv Detail & Related papers (2021-11-22T16:07:50Z)
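A minimal sketch of the construction behind the kernel sums-of-squares entry above, with assumed centers and shapes: writing H(x) = M(x) M(x)^T, where M(x) is a kernel-feature expansion, makes H(x) PSD at every x by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, (10, 1))

def phi(x, ls=0.5):                        # RBF features against the centers
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))     # (n, 10)

Bs = rng.normal(0, 0.3, (10, 3, 3))        # one 3x3 factor per feature

def H(x):
    M = np.einsum('ni,ijk->njk', phi(x), Bs)   # (n, 3, 3)
    return M @ np.swapaxes(M, 1, 2)            # PSD 3x3 matrix at each x

x = np.linspace(-1, 1, 4)[:, None]
print(np.linalg.eigvalsh(H(x)).min())          # nonnegative up to roundoff
```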
- Modern Non-Linear Function-on-Function Regression [8.231050911072755]
We introduce a new class of non-linear function-on-function regression models for functional data using neural networks.
We give two model fitting strategies: the Functional Direct Neural Network (FDNN) and the Functional Basis Neural Network (FBNN).
arXiv Detail & Related papers (2021-07-29T16:19:59Z)
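In the spirit of the basis strategy (FBNN) named in the entry above (a hedged sketch; the paper's architecture differs): expand input curves in a fixed basis, map the coefficients through a network, and synthesize the output curve from the predicted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
basis = np.stack([np.ones_like(t)] +
                 [np.sin(2 * np.pi * k * t) for k in (1, 2, 3)] +
                 [np.cos(2 * np.pi * k * t) for k in (1, 2, 3)])  # (7, 100)

def to_coeffs(f):                        # least-squares basis projection
    return np.linalg.lstsq(basis.T, f, rcond=None)[0]

x_curve = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.shape)
c_in = to_coeffs(x_curve)                # (7,)

W1, W2 = rng.normal(0, 0.3, (7, 16)), rng.normal(0, 0.3, (16, 7))
c_out = np.tanh(c_in @ W1) @ W2          # untrained MLP on coefficients
y_curve = c_out @ basis                  # predicted output function on t
print(y_curve.shape)                     # (100,)
```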
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
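The closed-form gating that gives the CfC networks in the entry above their speed can be sketched as follows; the gating form x(t) = sigmoid(-f t) * g + (1 - sigmoid(-f t)) * h follows the CfC paper, while the head parameterizations here are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(state, inp, t, Wf, Wg, Wh):
    z = np.concatenate([state, inp])
    f, g, h = np.tanh(z @ Wf), np.tanh(z @ Wg), np.tanh(z @ Wh)
    gate = sigmoid(-f * t)                 # time enters in closed form
    return gate * g + (1.0 - gate) * h     # no ODE solver required

Wf, Wg, Wh = (rng.normal(0, 0.3, (6, 4)) for _ in range(3))
state, inp = np.zeros(4), rng.normal(size=2)
for t in (0.1, 0.5, 1.0):                  # irregular time gaps, solver-free
    state = cfc_cell(state, inp, t, Wf, Wg, Wh)
print(state)
```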
- Compressing Deep ODE-Nets using Basis Function Expansions [105.05435207079759]
We consider formulations of the weights as continuous-depth functions using linear combinations of basis functions.
This perspective allows us to compress the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance.
In turn, both inference time and the memory footprint are reduced, enabling quick and rigorous adaptation between computational environments.
arXiv Detail & Related papers (2021-06-21T03:04:51Z)
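The change-of-basis compression in the entry above, sketched with assumed details: express a depth-dependent weight W(t) in a smooth basis and truncate high-order coefficients; if the coefficients decay, the truncation error is small, and the representation change itself requires no retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 4                                  # basis size, weight dimension
coeffs = rng.normal(size=(K, d, d)) / np.arange(1, K + 1)[:, None, None]

def basis(t, k):                             # cosine basis over depth t in [0,1]
    return 1.0 if k == 0 else np.sqrt(2) * np.cos(np.pi * k * t)

def W(t, C):                                 # W(t) = sum_k c_k * phi_k(t)
    return sum(basis(t, k) * C[k] for k in range(len(C)))

compressed = coeffs[:4]                      # keep the 4 lowest-order terms
t = 0.37
err = np.linalg.norm(W(t, coeffs) - W(t, compressed)) / np.linalg.norm(W(t, coeffs))
print(f"relative error at t={t}: {err:.3f}") # small if coefficients decay
```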
- High-dimensional Functional Graphical Model Structure Learning via Neighborhood Selection Approach [15.334392442475115]
We propose a neighborhood selection approach to estimate the structure of functional graphical models.
We thus circumvent the need for a well-defined precision operator, which may not exist when the functions are infinite-dimensional.
arXiv Detail & Related papers (2021-05-06T07:38:50Z)
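A hedged stand-in for the neighborhood selection approach in the entry above (the paper uses a function-on-function group lasso; here plain least squares on basis scores with block norms plays that role): regress node j's basis coefficients on all other nodes' coefficients and declare an edge where the coefficient block is large.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 200, 4, 3                      # samples, nodes, basis functions/node
Z = rng.normal(size=(n, p, K))           # basis scores of each node's function
Z[:, 1] += 0.8 * Z[:, 0]                 # node 1 depends on node 0

j = 1
X = Z[:, [i for i in range(p) if i != j]].reshape(n, -1)
Y = Z[:, j]
B = np.linalg.lstsq(X, Y, rcond=None)[0].reshape(p - 1, K, K)
scores = np.linalg.norm(B, axis=(1, 2))  # one score per candidate neighbor
print(scores)                            # node 0's block dominates
```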
- Learning Sub-Patterns in Piecewise Continuous Functions [4.18804572788063]
Most gradient descent algorithms can optimize neural networks that are sub-differentiable in their parameters.
This paper focuses on the case where the discontinuities arise from distinct sub-patterns.
We propose a new discontinuous deep neural network model trainable via a decoupled two-step procedure.
arXiv Detail & Related papers (2020-10-29T13:44:13Z)
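The decoupled two-step idea in the entry above, reduced to a toy (in the paper both the sub-pattern assignment and the sub-networks are learned; here the breakpoint is assumed known and polynomial fits stand in for the sub-networks to keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (400, 1))
y = np.where(x[:, 0] < 0, np.sin(3 * x[:, 0]), 2.0 + x[:, 0] ** 2)  # jump at 0

# Step 1: assign each point to a sub-pattern (learned in the paper;
# here the true breakpoint is assumed known).
mask = x[:, 0] < 0

# Step 2: fit one smooth model per sub-pattern.
def fit(xs, ys, deg=3):
    return np.polyfit(xs, ys, deg)

left, right = fit(x[mask, 0], y[mask]), fit(x[~mask, 0], y[~mask])
pred = np.where(mask, np.polyval(left, x[:, 0]), np.polyval(right, x[:, 0]))
print("MSE:", np.mean((pred - y) ** 2))   # small despite the discontinuity
```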
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
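The UNIPoint construction in the entry above, sketched with assumed parameterizations: a recurrent state emits per-basis parameters after each event, and the intensity is a positive sum of basis functions of the elapsed time.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    return np.log1p(np.exp(z))

def intensity(dt, h, Wa, Wb):
    a, b = h @ Wa, h @ Wb                  # per-basis parameters from state
    return softplus(a * dt + b).sum()      # positive by construction

Wa, Wb = rng.normal(0, 0.5, (8, 4)), rng.normal(0, 0.5, (8, 4))
h = np.tanh(rng.normal(size=8))            # recurrent state after an event
for dt in (0.1, 0.5, 2.0):                 # time since the last event
    print(dt, intensity(dt, h, Wa, Wb))
```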
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.