Deep Data Density Estimation through Donsker-Varadhan Representation
- URL: http://arxiv.org/abs/2104.06612v1
- Date: Wed, 14 Apr 2021 03:38:32 GMT
- Title: Deep Data Density Estimation through Donsker-Varadhan Representation
- Authors: Seonho Park, Panos M. Pardalos
- Abstract summary: We present a simple yet effective method for estimating the data density using a deep neural network and the Donsker-Varadhan variational lower bound on the KL divergence.
We show that the optimal critic function associated with the Donsker-Varadhan representation of the KL divergence between the data and the uniform distribution can estimate the data density.
- Score: 5.276937617129594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the data density is one of the challenging problems in deep
learning. In this paper, we present a simple yet effective method for
estimating the data density using a deep neural network and the
Donsker-Varadhan variational lower bound on the KL divergence. We show that the
optimal critic function associated with the Donsker-Varadhan representation on
the KL divergence between the data and the uniform distribution can estimate
the data density. We also present the deep neural network-based modeling and
its stochastic learning. The experimental results and possible applications of
the proposed method demonstrate that it is competitive with previous methods
and is readily applicable to a variety of tasks.
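For readers who want the mechanics: the Donsker-Varadhan representation lower-bounds the KL divergence by a variational problem over critic functions, and the optimizer of that problem recovers the log density ratio. A brief recap of the standard identities (notation ours, not necessarily the paper's):

```latex
% Donsker-Varadhan representation of the KL divergence:
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{T}\; \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \log \mathbb{E}_{x \sim Q}\big[e^{T(x)}\big]
% The supremum is attained (up to an additive constant C) at the log density ratio:
T^{*}(x) \;=\; \log \frac{dP}{dQ}(x) + C
% With Q uniform on a bounded support S of volume |S|, dP/dQ(x) = |S| p(x), hence
p(x) \;\propto\; \exp\big(T^{*}(x)\big)
```

Concretely, one can parameterize the critic with a neural network and ascend the bound stochastically. The sketch below is a minimal illustration under assumed choices (PyTorch, a small MLP, data rescaled to the unit cube); the paper's actual objective, architecture, and training details may differ.

```python
# Minimal sketch (assumptions: PyTorch, MLP critic, data rescaled to [0, 1]^d).
# Train a critic T with the DV lower bound between the data distribution P
# and the uniform distribution Q on the unit cube; exp(T) then tracks the
# density up to a constant. Not the authors' exact setup.
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def dv_lower_bound(critic, x_p, x_q):
    # E_P[T] - log E_Q[exp(T)], with log-mean-exp computed stably
    log_mean_exp = torch.logsumexp(critic(x_q), dim=0) - math.log(len(x_q))
    return critic(x_p).mean() - log_mean_exp

def train(data, steps=5000, batch=256, lr=1e-4):
    critic = Critic(data.shape[1])
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(steps):
        x_p = data[torch.randint(0, len(data), (batch,))]  # samples from P
        x_q = torch.rand(batch, data.shape[1])             # samples from Q
        loss = -dv_lower_bound(critic, x_p, x_q)           # ascend the bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    return critic  # density estimate: exp(critic(x)), up to normalization
```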
Related papers
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
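For reference, the score in question is the gradient of the log-density; diffusion models learn a time-indexed version of it over the noised marginals (standard definitions, not this paper's notation):

```latex
s(x) \;=\; \nabla_{x} \log p(x), \qquad s_t(x) \;=\; \nabla_{x} \log p_t(x)
```

Here p_t denotes the marginal of the data after t steps of the forward noising process.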
arXiv Detail & Related papers (2023-02-14T17:02:35Z) - Copula Density Neural Estimation [4.86067125387358]
We exploit the concept of a copula to build an estimate of the probability density function associated with any observed data.
Results show that the novel learning approach is capable of modeling complex distributions and can be applied to mutual information estimation and data generation.
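As background, the construction rests on Sklar's theorem: any joint density factors into its marginals and a copula density on the unit cube (standard result, notation ours):

```latex
p(x_1, \dots, x_d) \;=\; c\big(F_1(x_1), \dots, F_d(x_d)\big) \prod_{i=1}^{d} p_i(x_i)
```

Here F_i and p_i are the marginal CDFs and densities and c is the copula density on [0,1]^d; estimating c with a neural network then yields the joint density.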
arXiv Detail & Related papers (2022-11-25T10:53:27Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
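For context, a Laplace approximation turns the intractable marginal likelihood into a differentiable objective by expanding around a mode; a standard form (notation ours, with eta the augmentation parameters) is:

```latex
\log p(\mathcal{D} \mid \eta) \;\approx\; \log p\big(\mathcal{D} \mid \theta^{*}, \eta\big) + \log p\big(\theta^{*}\big) + \frac{d}{2} \log 2\pi - \frac{1}{2} \log \det \mathbf{H}
```

Here theta* is a MAP estimate with d parameters and H is the Hessian of the negative log-posterior at theta*; a Kronecker factorization keeps the log-determinant tractable for deep networks.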
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - Robust Learning via Ensemble Density Propagation in Deep Neural Networks [6.0122901245834015]
We formulate the problem of density propagation through layers of a deep neural network (DNN) and solve it using an Ensemble Density Propagation scheme.
Experiments using MNIST and CIFAR-10 datasets show a significant improvement in the robustness of the trained models to random noise and adversarial attacks.
arXiv Detail & Related papers (2021-11-10T21:26:08Z) - Efficient training of lightweight neural networks using Online
Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize a k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
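As a refresher, the textbook k-NN density estimate scores a point by the radius enclosing its k nearest neighbors; a minimal sketch (the function name and defaults are illustrative, and OSAKD's exact variant may differ):

```python
# Hypothetical sketch of k-NN density estimation in a feature space:
# p(x) ~= k / (n * V_d * r_k(x)^d), where r_k(x) is the distance from x
# to its k-th nearest neighbor and V_d is the unit-ball volume in R^d.
import math
import numpy as np

def knn_density(x, samples, k=10):
    # distance from x to its k-th nearest neighbor among the samples
    dists = np.sort(np.linalg.norm(samples - x, axis=1))
    r_k = dists[k - 1]
    n, d = samples.shape
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    return k / (n * v_d * r_k ** d)
```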
arXiv Detail & Related papers (2021-08-26T14:01:04Z) - Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z) - Roundtrip: A Deep Generative Neural Density Estimator [6.704101978104295]
We propose Roundtrip as a general-purpose neural density estimator based on deep generative models.
In a series of experiments, Roundtrip achieves state-of-the-art performance in a diverse range of density estimation tasks.
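As background, density estimators built on latent-variable generative models typically evaluate the density through an integral over the latent space, approximated by importance sampling (schematic form, notation ours; see the paper for Roundtrip's exact estimator):

```latex
p(x) \;=\; \int p(x \mid z)\, p(z)\, dz \;\approx\; \frac{1}{m} \sum_{j=1}^{m} \frac{p(x \mid z_j)\, p(z_j)}{q(z_j \mid x)}, \qquad z_j \sim q(\cdot \mid x)
```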
arXiv Detail & Related papers (2020-04-20T01:47:00Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, and match the accuracy of softmax models.
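As context, single-pass rejection of this kind is typically implemented by scoring the distance between a feature vector and per-class centroids with an RBF kernel and thresholding the best score; a minimal sketch (kernel form, length scale, and threshold are illustrative assumptions, not the paper's values):

```python
# Hypothetical sketch: flag an input as out-of-distribution when the maximum
# RBF-kernel similarity between its features and the class centroids is low.
import torch

def ood_reject(features, centroids, length_scale=0.1, threshold=0.5):
    # features: (d,) feature vector; centroids: (num_classes, d)
    sq_dists = ((centroids - features) ** 2).sum(dim=1)
    scores = torch.exp(-sq_dists / (2 * length_scale ** 2))
    return scores.max().item() < threshold  # True -> reject as OOD
```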
arXiv Detail & Related papers (2020-03-04T12:27:36Z) - Learning Generative Models using Denoising Density Estimators [29.068491722778827]
We introduce a new generative model based on denoising density estimators (DDEs).
Our main contribution is a novel technique to obtain generative models by minimizing the KL-divergence directly.
Experimental results demonstrate substantial improvement in density estimation and competitive performance in generative model training.
arXiv Detail & Related papers (2020-01-08T20:30:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.