Adaptive joint distribution learning
- URL: http://arxiv.org/abs/2110.04829v4
- Date: Wed, 10 Jan 2024 08:31:55 GMT
- Title: Adaptive joint distribution learning
- Authors: Damir Filipovic and Michael Multerer and Paul Schneider
- Abstract summary: We develop a new framework for embedding joint probability distributions in tensor product reproducing kernel Hilbert spaces (RKHS).
Our framework accommodates a low-dimensional, normalized and positive model of a Radon-Nikodym derivative, which we estimate from sample sizes of up to several million data points.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a new framework for embedding joint probability distributions in
tensor product reproducing kernel Hilbert spaces (RKHS). Our framework
accommodates a low-dimensional, normalized and positive model of a
Radon-Nikodym derivative, which we estimate from sample sizes of up to several
million data points, alleviating the inherent limitations of RKHS modeling.
Well-defined normalized and positive conditional distributions are natural
by-products of our approach. The embedding is fast to compute and accommodates
learning problems ranging from prediction to classification. Our theoretical
findings are supplemented by favorable numerical results.
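The core construction, representing a joint distribution through kernel evaluations on the sample, can be illustrated with a minimal NumPy sketch. This is a toy empirical joint embedding of a 1-D pair with a Gaussian kernel; the grid, bandwidth, and the clip-and-renormalize step are illustrative assumptions, not the paper's low-dimensional Radon-Nikodym estimator.

```python
import numpy as np

def gauss_kernel(a, b, bw=1.0):
    # Gaussian (RBF) kernel matrix between 1-D sample vectors a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / bw) ** 2)

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 0.5 * x + 0.1 * rng.normal(size=n)  # correlated pair (X, Y)

# Empirical joint embedding mu_XY = (1/n) sum_i k(., x_i) (x) l(., y_i),
# evaluated on a grid: mu(s, t) = (1/n) sum_i k(s, x_i) * l(t, y_i).
s = np.linspace(-3.0, 3.0, 61)
t = np.linspace(-3.0, 3.0, 61)
Ks = gauss_kernel(s, x)   # shape (61, n)
Lt = gauss_kernel(t, y)   # shape (61, n)
mu = Ks @ Lt.T / n        # shape (61, 61): joint embedding surface

# Toy positive, normalized surrogate on the grid (clip + renormalize;
# the paper instead fits a normalized, positive Radon-Nikodym model).
p = np.clip(mu, 0.0, None)
cell = (s[1] - s[0]) * (t[1] - t[0])
p = p / (p.sum() * cell)
```

Because the tensor-product structure factorizes into two Gram matrices, the grid evaluation is a single matrix product, which is what makes this kind of embedding cheap to compute.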
Related papers
- Kolmogorov-Smirnov GAN [52.36633001046723]
We propose a novel deep generative model, the Kolmogorov-Smirnov Generative Adversarial Network (KSGAN).
Unlike existing approaches, KSGAN formulates the learning process as a minimization of the Kolmogorov-Smirnov (KS) distance.
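For intuition, the two-sample KS distance that KSGAN minimizes is the largest gap between empirical CDFs. A hedged one-dimensional sketch (function name and data are illustrative; KSGAN itself works with a generalization to higher dimensions):

```python
import numpy as np

def ks_distance(a, b):
    # Two-sample Kolmogorov-Smirnov distance: sup_x |F_a(x) - F_b(x)|,
    # where F_a, F_b are empirical CDFs. The supremum is attained at
    # sample points, so evaluating on the pooled sample suffices.
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

rng = np.random.default_rng(0)
same = ks_distance(rng.normal(size=2000), rng.normal(size=2000))
shift = ks_distance(rng.normal(size=2000), rng.normal(loc=1.0, size=2000))
# 'same' stays small; 'shift' is clearly larger
```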
arXiv Detail & Related papers (2024-06-28T14:30:14Z)
- Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian probability distributions [5.801621787540268]
We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure.
We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution.
An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process.
arXiv Detail & Related papers (2024-02-12T22:02:23Z) - Chebyshev Particles [0.0]
We are the first to consider the posterior distribution of the objective as a mapping of samples in an infinite-dimensional Euclidean space.
We propose a new criterion, maximizing the weighted Riesz polarization quantity, to discretize rectifiable submanifolds via pairwise interaction.
We have achieved high performance from the experiments for parameter inference in a linear state-space model with synthetic data and a non-linear volatility model with real-world data.
arXiv Detail & Related papers (2023-09-10T16:40:30Z)
- General regularization in covariate shift adaptation [1.5469452301122175]
We show that the number of samples needed to achieve the same order of accuracy as in standard supervised learning without distribution shift is smaller than state-of-the-art analyses have proven.
arXiv Detail & Related papers (2023-07-21T11:19:00Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Optimal regularizations for data generation with probabilistic graphical models [0.0]
Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of L2 and L1 regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
arXiv Detail & Related papers (2021-12-02T14:45:16Z)
- Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network driven by an orthonormal input to a single white noise source adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves on MINE estimation in terms of data efficiency and variance, improves on conventional and variational Gaussian mixture models, and is also useful for training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
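The embedding idea in this last paper can be made concrete with the classical maximum mean discrepancy (MMD), the RKHS distance between two kernel mean embeddings. A minimal sketch, assuming a Gaussian kernel on 1-D data (the sum-of-squares density optimization itself is not reproduced here):

```python
import numpy as np

def rbf(a, b, bw=1.0):
    # Gaussian kernel matrix between 1-D sample vectors a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / bw) ** 2)

def mmd2(x, y, bw=1.0):
    # Squared MMD: ||mu_X - mu_Y||^2 in the RKHS, computed with the
    # biased (V-statistic) estimator, which is always nonnegative.
    return rbf(x, x, bw).mean() + rbf(y, y, bw).mean() - 2.0 * rbf(x, y, bw).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = rng.normal(size=1000)           # same distribution as x
z = rng.normal(loc=2.0, size=1000)  # shifted distribution
# mmd2(x, y) is near zero; mmd2(x, z) is much larger
```

With a characteristic kernel such as the Gaussian, MMD is zero exactly when the two distributions coincide, which is what makes mean embeddings usable as optimization targets over distributions.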
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.