Diffusion Random Feature Model
- URL: http://arxiv.org/abs/2310.04417v2
- Date: Mon, 9 Oct 2023 01:12:09 GMT
- Title: Diffusion Random Feature Model
- Authors: Esha Saha and Giang Tran
- Abstract summary: We present a diffusion model-inspired deep random feature model that is interpretable.
We derive generalization bounds between the distribution of sampled data and the true distribution using properties of score matching.
We validate our findings by generating samples on the Fashion-MNIST dataset and instrumental audio data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Diffusion probabilistic models have been successfully used to generate data
from noise. However, most diffusion models are computationally expensive,
difficult to interpret, and lack theoretical justification. Random feature
models, on the other hand, have gained popularity due to their
interpretability, but their application to complex machine learning tasks
remains limited. In
this work, we present a diffusion model-inspired deep random feature model that
is interpretable and gives numerical results comparable to those of a fully
connected neural network with the same number of trainable parameters.
Specifically, we
extend existing results for random features and derive generalization bounds
between the distribution of sampled data and the true distribution using
properties of score matching. We validate our findings by generating samples on
the Fashion-MNIST dataset and instrumental audio data.
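To make the construction concrete, here is a minimal sketch of the core idea under stated assumptions: a cosine random feature map with frozen random weights, fitted by ridge regression to the denoising score matching target at a single noise level. The feature map, the single noise level, and all names below are illustrative simplifications, not the authors' exact deep, time-indexed model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random feature score model: s(x, t) ~ features(x, t) @ W, where the random
# weights (Omega, bias) are sampled once and frozen; only the linear readout W
# is trained, which keeps the model interpretable and cheap to fit.
d, n_features = 2, 512
Omega = rng.normal(size=(d + 1, n_features))    # frozen random weights
bias = rng.uniform(0.0, 2 * np.pi, n_features)  # frozen random phases

def features(x, t):
    """Cosine random features of the samples x and (scalar) time t."""
    z = np.concatenate([x, np.full((x.shape[0], 1), t)], axis=1)
    return np.cos(z @ Omega + bias)

def dsm_fit(X, sigma, t, ridge=1e-6):
    """Fit W by denoising score matching: for x_tilde = x + sigma * eps,
    the regression target for the score at x_tilde is -eps / sigma."""
    eps = rng.normal(size=X.shape)
    Phi = features(X + sigma * eps, t)           # (n, n_features)
    target = -eps / sigma                        # (n, d)
    A = Phi.T @ Phi + ridge * np.eye(n_features)
    return np.linalg.solve(A, Phi.T @ target)    # ridge least squares

X = rng.normal(size=(1024, d))                   # stand-in training data
W = dsm_fit(X, sigma=0.5, t=0.5)
score = lambda x, t: features(x, t) @ W          # estimated score function
```

Because only W is trained, fitting reduces to a convex least-squares problem; this linearity is what makes random feature models comparatively easy to analyze, and plausibly underlies the generalization analysis mentioned in the abstract.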
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off between the training objective and the imposed constraints.
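As a rough, generic illustration of dual training (Lagrangian primal descent plus projected dual ascent; the losses below are toy stand-ins, not the paper's diffusion objective or distributional constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):          # toy stand-in for the diffusion training loss
    return float(np.sum((theta - 1.0) ** 2))

def constraint(theta):         # toy stand-in; we require constraint(theta) <= 0
    return float(np.sum(theta) - 0.5)

def grad(f, theta, eps=1e-5):  # finite-difference gradient, fine for a sketch
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta, lam = rng.normal(size=3), 0.0
for _ in range(500):
    lagrangian = lambda th: objective(th) + lam * constraint(th)
    theta = theta - 0.05 * grad(lagrangian, theta)  # primal descent
    lam = max(0.0, lam + 0.05 * constraint(theta))  # projected dual ascent
```

At convergence the multiplier lam prices the constraint, which is the sense in which the trained model trades off objective and constraints.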
arXiv Detail & Related papers (2024-08-27T14:25:42Z)
- Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand [9.460857822923842]
Causal inference from observational data plays a critical role in many applications in trustworthy machine learning.
We show how to sample from any identifiable interventional distribution given an arbitrary causal graph.
We also generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables.
arXiv Detail & Related papers (2024-02-12T05:48:31Z)
- Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
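For reference, a hedged reconstruction of the idealized score entropy loss: it drives a learned ratio $s_\theta(x)_y$ toward the true transition ratio $p(y)/p(x)$, with weights $w_{xy}$; the notation is an assumption, and the trainable variant used in practice replaces the unknown ratios with a denoising form.

```latex
% Idealized score entropy; zero exactly when s_theta(x)_y = p(y)/p(x).
\mathcal{L}_{\mathrm{SE}}(\theta)
  = \mathbb{E}_{x \sim p} \sum_{y \neq x} w_{xy}
    \Bigl( s_\theta(x)_y - \frac{p(y)}{p(x)} \log s_\theta(x)_y
           + K\Bigl(\frac{p(y)}{p(x)}\Bigr) \Bigr),
  \qquad K(a) = a \log a - a.
```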
arXiv Detail & Related papers (2023-10-25T17:59:12Z)
- Generative Diffusion From An Action Principle [0.0]
We show that score matching can be derived from an action principle, like the ones commonly used in physics.
We use this insight to demonstrate the connection between different classes of diffusion models.
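For context, the functional in question is the classical score matching objective; integration by parts turns it into a form free of the unknown data score (a standard identity, not the paper's action-principle derivation itself):

```latex
J(\theta)
  = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}
      \bigl\| s_\theta(x) - \nabla_x \log p_{\mathrm{data}}(x) \bigr\|^2
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}
      \Bigl[ \tfrac{1}{2}\,\bigl\| s_\theta(x) \bigr\|^2
             + \nabla_x \cdot s_\theta(x) \Bigr] + \mathrm{const}.
```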
arXiv Detail & Related papers (2023-10-06T18:00:00Z)
- Bayesian Flow Networks [4.585102332532472]
This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference.
Starting from a simple prior and iteratively updating the two distributions yields a generative procedure similar to the reverse process of diffusion models.
BFNs achieve competitive log-likelihoods for image modelling on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion models on the text8 character-level language modelling task.
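For continuous data, the iterative update is conjugate Gaussian inference on the distribution parameters; a hedged sketch of one step, with $y_i$ a noisy observation sent at accuracy (precision) $\alpha_i$ and the accuracy schedule omitted:

```latex
% One Bayesian update of the Gaussian parameters (mu, rho = precision).
\rho_i = \rho_{i-1} + \alpha_i,
\qquad
\mu_i = \frac{\rho_{i-1}\,\mu_{i-1} + \alpha_i\, y_i}{\rho_i}.
```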
arXiv Detail & Related papers (2023-08-14T09:56:35Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The distribution generated from the estimated score function captures the geometric structure of the data and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning Multivariate CDFs and Copulas using Tensor Factorization [39.24470798045442]
Learning the multivariate distribution of data is a core challenge in statistics and machine learning.
In this work, we aim to learn multivariate cumulative distribution functions (CDFs), as they can handle mixed random variables.
We show that any grid sampled version of a joint CDF of mixed random variables admits a universal representation as a naive Bayes model.
We demonstrate the superior performance of the proposed model in several synthetic and real datasets and applications including regression, sampling and data imputation.
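The naive Bayes representation referenced above is a low-rank (CP) mixture in which coordinates are conditionally independent given a latent state; in hedged notation, with an illustrative latent variable $H$ taking $R$ values:

```latex
% Rank-R naive Bayes / CP representation of a joint CDF.
F(x_1, \dots, x_d)
  = \sum_{h=1}^{R} \Pr[H = h]
    \prod_{k=1}^{d} \Pr\bigl[X_k \le x_k \mid H = h\bigr].
```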
arXiv Detail & Related papers (2022-10-13T16:18:46Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performance comparable to that of a logistic model trained on the full, unaggregated data.
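The maximum entropy hypothesis has a standard closed form: among all distributions matching given aggregate moment constraints $\mathbb{E}[\varphi(x)] = m$, the entropy maximizer is an exponential family (a general fact; the paper's specific features $\varphi$ are not reproduced here):

```latex
% Maximum entropy distribution under moment constraints E[phi(x)] = m;
% the multiplier lambda is chosen so that the constraints hold.
p^{\star}(x) \propto \exp\bigl( \lambda^{\top} \varphi(x) \bigr).
```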
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
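For reference, a PSD model represents a non-negative function via a kernel $k$, anchor points $x_1, \dots, x_m$, and a positive semidefinite matrix $A$ (the sampling algorithm itself is omitted); non-negativity is automatic, which is what makes these models usable as unnormalized densities:

```latex
% PSD model: non-negative for every x because A is positive semidefinite,
% with k_x = (k(x, x_1), ..., k(x, x_m)) the vector of kernel evaluations.
f(x) = \sum_{i,j=1}^{m} A_{ij}\, k(x, x_i)\, k(x, x_j)
     = k_x^{\top} A\, k_x,
\qquad A \succeq 0.
```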
arXiv Detail & Related papers (2021-10-20T12:25:22Z)