Quasi-Bayesian sequential deconvolution
- URL: http://arxiv.org/abs/2408.14402v2
- Date: Fri, 13 Dec 2024 06:08:51 GMT
- Title: Quasi-Bayesian sequential deconvolution
- Authors: Stefano Favaro, Sandra Fortini
- Abstract summary: We develop a principled sequential approach to estimate $f$ in a streaming or online domain.
Local and uniform Gaussian central limit theorems for $f_n$ are established, leading to credible intervals and bands for $f$.
An empirical validation of our methods is presented on synthetic and real data.
- Score: 7.10052009802944
- Abstract: Density deconvolution deals with the estimation of the probability density function $f$ of a random signal from $n\geq1$ data observed with independent and known additive random noise. This is a classical problem in statistics, for which frequentist and Bayesian nonparametric approaches are available to estimate $f$ in static or batch domains. In this paper, we consider the problem of density deconvolution in a streaming or online domain, and develop a principled sequential approach to estimate $f$. By relying on a quasi-Bayesian sequential (learning) model for the data, often referred to as Newton's algorithm, we obtain a sequential deconvolution estimate $f_{n}$ of $f$ that is easy to evaluate, computationally efficient, and whose computational cost remains constant as data increase, which is desirable for streaming data. In particular, local and uniform Gaussian central limit theorems for $f_{n}$ are established, leading to asymptotic credible intervals and bands for $f$, respectively. We provide the sequential deconvolution estimate $f_{n}$ with large-sample asymptotic guarantees under the quasi-Bayesian sequential model for the data, proving merging with respect to the direct density estimation problem, and also under a "true" frequentist model for the data, proving consistency. An empirical validation of our methods is presented on synthetic and real data, including comparisons with a kernel-based approach and a Bayesian nonparametric approach with a Dirichlet process mixture prior.
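The recursion behind Newton's algorithm admits a compact sketch. Below is a minimal, grid-based illustration of a Newton-style sequential update for deconvolution, assuming Gaussian noise with known standard deviation; the function name, the flat initialization, and the weight choice $\alpha_n=(n+1)^{-\gamma}$ are illustrative assumptions, and the paper's actual recursion (stated for the mixing measure, with the asymptotic guarantees above) may differ in its details.

```python
import numpy as np
from scipy.stats import norm

def newton_deconvolution(y, grid, noise_sd, gamma=0.75):
    """Newton-style sequential deconvolution on a grid (illustrative sketch).

    y        : stream of noisy observations Y_i = X_i + eps_i
    grid     : evaluation grid for the signal density f
    noise_sd : known standard deviation of the Gaussian noise
    gamma    : exponent of the learning weights alpha_n = (n + 1)**(-gamma)
    """
    dx = grid[1] - grid[0]
    f = np.full(grid.shape, 1.0 / (grid[-1] - grid[0]))  # flat initial guess f_0
    for n, yn in enumerate(y, start=1):
        alpha = (n + 1.0) ** (-gamma)                # weights decay but sum to infinity
        like = norm.pdf(yn - grid, scale=noise_sd)   # noise density g(y_n - x) on the grid
        post = like * f
        post /= post.sum() * dx                      # one-step "posterior" for x given y_n
        f = (1.0 - alpha) * f + alpha * post         # convex-combination (Newton) update
    return f

# Toy usage: bimodal signal, Gaussian noise with known sd 0.5.
rng = np.random.default_rng(0)
x = np.where(rng.random(5000) < 0.5,
             rng.normal(-2.0, 0.5, 5000), rng.normal(2.0, 0.5, 5000))
grid = np.linspace(-6.0, 6.0, 400)
f_hat = newton_deconvolution(x + rng.normal(0.0, 0.5, 5000), grid, noise_sd=0.5)
```

Note that each update touches only the current observation and the running estimate, which is why the per-step cost stays constant as the stream grows.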
Related papers
- Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
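As a rough illustration of why Picard-type schemes parallelize well, the sketch below runs plain Picard fixed-point iteration for an initial value problem: each sweep re-evaluates the drift at every time grid point from the previous trajectory, so all evaluations within a sweep can run in parallel. This is a generic sketch, not the paper's sampler for diffusions; `picard_solve` and its arguments are hypothetical names.

```python
import numpy as np

def picard_solve(v, x0, ts, n_iters=50):
    """Generic Picard iteration for x'(t) = v(x, t), x(0) = x0.

    Each sweep updates all time points from the previous trajectory,
    so the drift evaluations within a sweep are embarrassingly parallel.
    """
    xs = np.full(len(ts), x0, dtype=float)       # initial guess: constant trajectory
    for _ in range(n_iters):
        dx = v(xs, ts)                           # drift at every grid point at once
        # cumulative trapezoidal integral of the drift along the grid
        incr = np.concatenate(([0.0], np.cumsum(0.5 * (dx[1:] + dx[:-1]) * np.diff(ts))))
        xs = x0 + incr
    return xs

# Example: x' = -x, x(0) = 1, whose solution is exp(-t).
ts = np.linspace(0.0, 2.0, 201)
xs = picard_solve(lambda x, t: -x, 1.0, ts)
```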
arXiv Detail & Related papers (2024-12-10T11:50:46Z)
- Linear cost and exponentially convergent approximation of Gaussian Matérn processes [43.341057405337295]
The computational cost of inference and prediction for statistical models based on Gaussian processes scales cubically with the number of observations.
We develop a method with linear cost whose covariance error decreases exponentially fast in the order $m$ of the proposed approximation.
The method is based on an optimal rational approximation of the spectral density and results in an approximation that can be represented as a sum of $m$ independent Markov processes.
arXiv Detail & Related papers (2024-10-16T19:57:15Z)
- O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions [6.76974373198208]
We establish a fast convergence theory for the denoising diffusion probabilistic model (DDPM) under minimal assumptions.
We show that the convergence rate improves to $O(k/T)$, where $k$ is the intrinsic dimension of the target data distribution.
This highlights the ability of DDPM to automatically adapt to unknown low-dimensional structures.
arXiv Detail & Related papers (2024-09-27T17:59:10Z)
- Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV).
We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
arXiv Detail & Related papers (2024-08-25T10:28:31Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive bias of the model clear and interpretable.
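For intuition, here is a hedged sketch of density estimation with a Sobolev-type roughness penalty: the log-density is expanded in a cosine basis and frequency-weighted squared coefficients are penalized. This is a generic illustration only; the cited paper regularizes a Sobolev norm of the density itself, and all names and hyperparameters below are assumptions.

```python
import numpy as np

def sobolev_density_fit(x, grid, n_modes=10, lam=1e-2, s=1, steps=2000, lr=0.05):
    """Penalised MLE for a density on [grid[0], grid[-1]] (illustrative sketch).

    The log-density is expanded in a cosine basis, and the squared Sobolev
    norm of order s of the expansion is added as a roughness penalty.
    """
    a, b = grid[0], grid[-1]
    def basis(t):  # cosine features on [a, b], shape (len(t), n_modes)
        u = (np.asarray(t) - a) / (b - a)
        return np.cos(np.pi * np.outer(u, np.arange(1, n_modes + 1)))
    Phi_x, Phi_g = basis(x), basis(grid)
    w = np.zeros(n_modes)
    pen = lam * (1.0 + np.arange(1, n_modes + 1) ** 2) ** s  # Sobolev weights
    dg = grid[1] - grid[0]
    for _ in range(steps):
        dens = np.exp(Phi_g @ w)
        Z = dens.sum() * dg
        # gradient of  -mean log f(x_i) + penalty,  with f = exp(Phi w) / Z
        grad = (-Phi_x.mean(axis=0)
                + (Phi_g * dens[:, None]).sum(axis=0) * dg / Z
                + 2.0 * pen * w)
        w -= lr * grad
    dens = np.exp(Phi_g @ w)
    return dens / (dens.sum() * dg)

# Toy usage on standard normal data.
rng = np.random.default_rng(0)
f_hat = sobolev_density_fit(rng.normal(0.0, 1.0, 2000), np.linspace(-4.0, 4.0, 200))
```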
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Statistical Inference with Stochastic Gradient Methods under $\phi$-mixing Data [9.77185962310918]
We propose a mini-batch SGD estimator for statistical inference when the data is $\phi$-mixing.
The confidence intervals are constructed using an associated mini-batch SGD procedure.
The proposed method is memory-efficient and easy to implement in practice.
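As a generic illustration of SGD-based inference, the sketch below streams mini-batches once, Polyak-averages the iterates, and builds a confidence interval from nonoverlapping blocks of iterates (a batch-means-style variance estimate). It is not the cited paper's exact procedure, and all names and constants are illustrative.

```python
import numpy as np

def sgd_mean_ci(x, batch_size=20, lr0=0.5, alpha=0.6, n_blocks=10, z=1.96):
    """Mini-batch SGD for the mean with a batch-means CI (illustrative sketch)."""
    theta, iterates = 0.0, []
    for t, i in enumerate(range(0, len(x) - batch_size + 1, batch_size), start=1):
        batch = x[i:i + batch_size]
        grad = theta - batch.mean()              # gradient of 0.5*(theta - x)^2, averaged
        theta -= lr0 * t ** (-alpha) * grad      # decaying step size
        iterates.append(theta)
    iterates = np.array(iterates)
    est = iterates.mean()                        # Polyak-averaged estimate
    blocks = np.array_split(iterates, n_blocks)  # nonoverlapping blocks of iterates
    block_means = np.array([b.mean() for b in blocks])
    se = block_means.std(ddof=1) / np.sqrt(n_blocks)
    return est, (est - z * se, est + z * se)

# Toy usage: estimate the mean 0.5 from 20,000 streamed samples.
rng = np.random.default_rng(0)
est, ci = sgd_mean_ci(0.5 + rng.standard_normal(20_000))
```

The blocks absorb the serial dependence among iterates, which is the same difficulty that mixing conditions on the data introduce.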
arXiv Detail & Related papers (2023-02-24T16:16:43Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanism behind score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that a predictor-corrector scheme gives better convergence than using either portion alone.
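For context on the predictor-corrector mechanic, here is a hedged sketch for an Ornstein-Uhlenbeck forward process: the predictor takes an Euler-Maruyama step of the reverse SDE and the corrector runs Langevin steps at the current noise level, with the score function supplied by the caller (in practice, a learned network). Step sizes and the `snr` heuristic are illustrative assumptions, not the cited paper's scheme.

```python
import numpy as np

def predictor_corrector_sample(score, n_steps=500, n_corrector=1, T=5.0,
                               dim=1, snr=0.1, rng=None):
    """Predictor-corrector sampling for an OU forward process (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    x = rng.standard_normal(dim)                 # start from the OU stationary law
    for i in range(n_steps):
        t = T - i * dt
        # predictor: Euler-Maruyama step of the reverse SDE dx = [x + 2*score] dt + sqrt(2) dW
        x = x + (x + 2.0 * score(x, t)) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(dim)
        for _ in range(n_corrector):             # corrector: Langevin steps at level t - dt
            eps = snr * dt
            x = x + eps * score(x, t - dt) + np.sqrt(2.0 * eps) * rng.standard_normal(dim)
    return x

# Toy usage with the exact score of a Gaussian target N(2, 0.25) under the OU process.
mu, sig2 = 2.0, 0.25
score = lambda x, t: -(x - mu * np.exp(-t)) / (sig2 * np.exp(-2.0 * t) + 1.0 - np.exp(-2.0 * t))
draw = predictor_corrector_sample(score)
```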
arXiv Detail & Related papers (2022-06-13T14:57:35Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent algorithm and provide an improved analysis under a more nuanced condition on the noise of the gradients.
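A minimal sketch of the clipping idea, assuming the simplest case of streaming mean estimation: clipping the per-sample gradient caps the contribution of heavy-tailed observations. The threshold and step sizes below are illustrative, not the paper's tuned choices.

```python
import numpy as np

def clipped_sgd_mean(stream, clip=5.0, lr0=1.0):
    """Streaming mean estimation with clipped gradients (illustrative sketch)."""
    theta = 0.0
    for t, x in enumerate(stream, start=1):
        g = np.clip(theta - x, -clip, clip)  # clipped gradient of 0.5*(theta - x)^2
        theta -= (lr0 / t) * g               # 1/t step size
    return theta

# Heavy-tailed (Student-t, df=2.5) stream with true mean 1.0.
rng = np.random.default_rng(1)
est = clipped_sgd_mean(1.0 + rng.standard_t(2.5, size=100_000))
```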
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Denoising Score Matching with Random Fourier Features [11.60130641443281]
We derive an analytical expression for denoising score matching using the kernel exponential family as the model distribution.
The obtained expression explicitly depends on the noise variance, so the validation loss can be straightforwardly used to tune the noise level.
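A hedged one-dimensional sketch of denoising score matching with random Fourier features: data are perturbed with Gaussian noise at level `sigma`, and a score model linear in RFF features is fit by least squares against the conditional score target $-\varepsilon/\sigma$. The cited paper works with a kernel exponential family model; the linear-in-features form below is a simplifying assumption.

```python
import numpy as np

def dsm_rff_fit(x, sigma, n_features=200, bw=1.0, rng=None):
    """Denoising score matching with random Fourier features, 1-d (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    omega = rng.normal(0.0, 1.0 / bw, n_features)        # spectral frequencies (RBF kernel)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_features)
    phi = lambda u: np.sqrt(2.0 / n_features) * np.cos(np.outer(u, omega) + phase)
    eps = rng.standard_normal(len(x))
    x_noisy = x + sigma * eps                            # perturbed data
    target = -eps / sigma                                # = -(x_noisy - x) / sigma**2
    Phi = phi(x_noisy)
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)     # least-squares DSM fit
    return lambda u: phi(np.atleast_1d(u)) @ w           # fitted score estimate

# Fit the score of N(0,1) data perturbed at noise level sigma = 0.3.
rng = np.random.default_rng(2)
score_hat = dsm_rff_fit(rng.standard_normal(20_000), sigma=0.3, rng=rng)
```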
arXiv Detail & Related papers (2021-01-13T18:02:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.