Deconvolutional Density Network: Free-Form Conditional Density Estimation
- URL: http://arxiv.org/abs/2105.14367v1
- Date: Sat, 29 May 2021 20:09:25 GMT
- Title: Deconvolutional Density Network: Free-Form Conditional Density Estimation
- Authors: Bing Chen, Mazharul Islam, Lin Wang, Jisuo Gao and Jeff Orchard
- Abstract summary: A neural network can be used to compute the output distribution explicitly.
We show the benefits of modeling free-form distributions using deconvolution.
We compare our method to a number of other density-estimation approaches.
- Score: 6.805003206706124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional density estimation is the task of estimating the probability of
an event, conditioned on some inputs. A neural network can be used to compute
the output distribution explicitly. For such a task, there are many ways to
represent a continuous-domain distribution using the output of a neural
network, but each comes with its own limitations on which distributions it can
accurately render. If the family of functions is too restrictive, it will not
be appropriate for many datasets. In this paper, we demonstrate the benefits of
modeling free-form distributions using deconvolution. This approach is flexible,
yet still benefits from the topological smoothness that the deconvolution
layers provide. We compare our method to a number of other
density-estimation approaches, and show that our Deconvolutional Density
Network (DDN) outperforms the competing methods on many artificial and real
tasks, without committing to a restrictive parametric model.
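
The abstract gives no architectural details. As a rough illustration of the idea only (PyTorch; every layer size, bin count, and the loss at the end are hypothetical), the sketch below upsamples a conditioning vector through transposed-convolution ("deconvolution") layers and normalizes with a softmax, producing a discretized free-form density over bins:

```python
import torch
import torch.nn as nn

class DeconvDensityHead(nn.Module):
    """Map a conditioning input x to a discretized density over num_bins
    bins of the output variable. All sizes are illustrative only."""
    def __init__(self, in_dim=8, num_bins=64):
        super().__init__()
        self.num_bins = num_bins
        self.fc = nn.Linear(in_dim, 16 * (num_bins // 4))
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.fc(x).view(x.shape[0], 16, self.num_bins // 4)
        logits = self.deconv(h).squeeze(1)        # (batch, num_bins)
        return torch.softmax(logits, dim=-1)      # valid distribution over bins

# Training would minimize the negative log-likelihood of the bin
# containing each observed target:
model = DeconvDensityHead()
x = torch.randn(32, 8)
p = model(x)                                      # (32, 64)
bin_idx = torch.randint(0, 64, (32,))             # stand-in for binned targets
nll = -torch.log(p[torch.arange(32), bin_idx] + 1e-12).mean()
```

The softmax guarantees a properly normalized discrete density, while the deconvolution stack supplies the smoothness across neighbouring bins that the abstract refers to.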
Related papers
- CF-GO-Net: A Universal Distribution Learner via Characteristic Function Networks with Graph Optimizers [8.816637789605174]
We introduce an approach which employs the characteristic function (CF), a probabilistic descriptor that directly corresponds to the distribution.
Unlike the probability density function (pdf), the characteristic function not only always exists, but also provides an additional degree of freedom.
Our method allows the use of a pre-trained model, such as a well-trained autoencoder, and is capable of learning directly in its feature space.
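As background on the central object, the snippet below computes an empirical characteristic function from samples and a simple CF-matching loss at random frequencies. This is a hedged sketch of the general idea only; CF-GO-Net's actual objective, its graph optimizers, and its use of pre-trained feature spaces are not reproduced.

```python
import torch

def empirical_cf(x, t):
    """Empirical characteristic function of samples x (n, d) at
    frequencies t (m, d): phi(t) = E[exp(i * <t, x>)]."""
    angles = x @ t.T                      # (n, m) inner products <t_j, x_i>
    return torch.cos(angles).mean(0), torch.sin(angles).mean(0)  # Re, Im

def cf_matching_loss(fake, real, num_freq=128):
    """Match CFs of generated and real samples at random frequencies
    (a generic loss; the paper also optimizes where to evaluate the CF)."""
    t = torch.randn(num_freq, real.shape[1])
    fr, fi = empirical_cf(fake, t)
    rr, ri = empirical_cf(real, t)
    return ((fr - rr) ** 2 + (fi - ri) ** 2).mean()
```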
arXiv Detail & Related papers (2024-09-19T09:33:12Z)
- Generative Conditional Distributions by Neural (Entropic) Optimal Transport [12.152228552335798]
We introduce a novel neural entropic optimal transport method designed to learn generative models of conditional distributions.
Our method relies on the minimax training of two neural networks.
Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques.
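The summary does not state the objective. As related background rather than the paper's method, the sketch below trains two potential networks on the standard unconstrained dual of entropic OT with a squared-Euclidean cost; note this particular dual is a joint maximization, whereas the paper's minimax game between networks is more involved. All sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

eps = 0.1                                  # entropic regularization strength
f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
g = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

def dual_objective(x, y):
    """E[f(x)] + E[g(y)] - eps * E[exp((f(x) + g(y) - c(x, y)) / eps) - 1]."""
    cost = torch.cdist(x, y) ** 2                    # (n, m) pairwise costs
    fx, gy = f(x), g(y)                              # (n, 1), (m, 1)
    penalty = torch.exp((fx + gy.T - cost) / eps) - 1.0
    return fx.mean() + gy.mean() - eps * penalty.mean()

x = torch.randn(256, 2)                    # samples from the source
y = torch.randn(256, 2) + 2.0              # samples from the target
for _ in range(500):                       # ascend the dual
    opt.zero_grad()
    (-dual_objective(x, y)).backward()
    opt.step()
```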
arXiv Detail & Related papers (2024-06-04T13:45:35Z)
- Squared Neural Families: A New Class of Tractable Density Models [23.337256081314518]
We develop and investigate a new class of probability distributions, which we call a Squared Neural Family (SNEFY).
We show that SNEFYs admit closed form normalising constants in many cases of interest, thereby resulting in flexible yet fully tractable density models.
Their utility is illustrated on a variety of density estimation, conditional density estimation, and density estimation with missing data tasks.
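As a loose illustration of the "squared" construction, the sketch below takes the squared norm of a small random-feature network's output times a Gaussian base measure as an unnormalized density, then normalizes numerically in 1-D. The paper's point is that for suitable feature maps this normalizer is available in closed form; the grid integration here is purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=16), rng.normal(size=16)   # random hidden layer (1-D input)
V = rng.normal(size=(4, 16))                      # readout matrix

def unnormalized(x):
    """||V h(x)||^2 times a Gaussian base measure -- the general shape
    of a squared neural family (feature map chosen arbitrarily here)."""
    h = np.cos(x[:, None] * W + b)                 # (n, 16) features
    sq = np.sum((h @ V.T) ** 2, axis=1)            # squared norm, always >= 0
    base = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    return sq * base

grid = np.linspace(-6, 6, 2001)
Z = np.trapz(unnormalized(grid), grid)             # numeric stand-in for the
density = unnormalized(grid) / Z                   # closed-form normalizer
```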
arXiv Detail & Related papers (2023-05-22T23:56:11Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
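A minimal sketch of that idea, assuming the common formulation in which per-instance channel statistics are resampled from Gaussians whose scales are estimated across the batch (shapes and the train-time-only usage are assumptions, not the paper's exact recipe):

```python
import torch

def perturb_feature_stats(x, eps=1e-6):
    """Randomize the per-instance mean/std of a feature map x (B, C, H, W)
    using the batch-level variance of those statistics; apply during
    training only, as a model of domain shift."""
    mu = x.mean(dim=(2, 3), keepdim=True)                 # (B, C, 1, 1)
    sig = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    std_mu = mu.std(dim=0, keepdim=True)                  # uncertainty of the
    std_sig = sig.std(dim=0, keepdim=True)                # statistics themselves
    beta = mu + torch.randn_like(mu) * std_mu             # resampled mean
    gamma = sig + torch.randn_like(sig) * std_sig         # resampled std
    return gamma * (x - mu) / sig + beta

x = torch.randn(8, 64, 14, 14)            # a feature map
x_aug = perturb_feature_stats(x)          # same shape, perturbed statistics
```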
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
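The Nadaraya-Watson estimate itself is standard; the sketch below computes kernel-weighted class frequencies around a query point and an entropy score as one possible uncertainty value. NUQ's actual uncertainty measures derived from this estimate are not reproduced here.

```python
import numpy as np

def nw_class_probs(x_query, X, y, num_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted class
    frequencies of the training points around x_query."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))         # Gaussian kernel weights
    probs = np.bincount(y, weights=w, minlength=num_classes)
    return probs / probs.sum()

X = np.random.randn(500, 2)
y = (X[:, 0] > 0).astype(int)                      # toy binary labels
p = nw_class_probs(np.array([0.1, -0.3]), X, y, num_classes=2)
entropy = -np.sum(p * np.log(p + 1e-12))           # one simple uncertainty score
```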
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Adversarial sampling of unknown and high-dimensional conditional distributions [0.0]
In this paper, both the sampling method and the inference of the underlying distribution are handled with a data-driven method known as generative adversarial networks (GANs).
A GAN trains two competing neural networks, yielding a generator that can effectively produce samples from the training-set distribution.
It is shown that all the versions of the proposed algorithm effectively sample the target conditional distribution with minimal impact on the quality of the samples.
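For orientation, a generic conditional-GAN training step is sketched below (all sizes hypothetical); the paper's specific algorithm variants are not captured here.

```python
import torch
import torch.nn as nn

Z_DIM, X_DIM, Y_DIM = 8, 4, 2                      # illustrative sizes
G = nn.Sequential(nn.Linear(X_DIM + Z_DIM, 64), nn.ReLU(), nn.Linear(64, Y_DIM))
D = nn.Sequential(nn.Linear(X_DIM + Y_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y_real):
    z = torch.randn(x.shape[0], Z_DIM)
    y_fake = G(torch.cat([x, z], dim=1))           # sample y | x
    # Discriminator: real (x, y) pairs vs. generated ones
    d_real = D(torch.cat([x, y_real], dim=1))
    d_fake = D(torch.cat([x, y_fake.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator
    g_loss = bce(D(torch.cat([x, y_fake], dim=1)), torch.ones_like(d_fake))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```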
arXiv Detail & Related papers (2021-11-08T12:23:38Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
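The summary does not specify the framework. As a much simpler stand-in for "quantifiable robustness against perturbations," the sketch below trains a one-layer graph convolution against worst-case node-feature perturbations in an epsilon-ball (plain PGD adversarial training, not the paper's distributionally robust objective):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """One-layer graph convolution: logits = A_hat @ X @ W."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.W = nn.Linear(in_dim, num_classes)

    def forward(self, A_hat, X):
        return self.W(A_hat @ X)

def robust_loss(model, A_hat, X, y, mask, eps=0.1, steps=5, lr=0.05):
    """Worst-case cross-entropy over an eps-ball of feature perturbations."""
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(steps):                         # inner maximization (PGD)
        loss = F.cross_entropy(model(A_hat, X + delta)[mask], y[mask])
        grad, = torch.autograd.grad(loss, delta)
        delta = ((delta + lr * grad.sign()).clamp(-eps, eps)
                 .detach().requires_grad_(True))
    return F.cross_entropy(model(A_hat, X + delta)[mask], y[mask])

# Toy usage with a random symmetric, self-looped, normalized adjacency:
N, D, C = 10, 5, 3
A = (torch.rand(N, N) > 0.7).float()
A = ((A + A.T + torch.eye(N)) > 0).float()
d = A.sum(1)
A_hat = A / torch.outer(d.sqrt(), d.sqrt())
model = GCN(D, C)
X, y = torch.randn(N, D), torch.randint(0, C, (N,))
mask = torch.zeros(N, dtype=torch.bool)
mask[:4] = True                                    # labeled nodes
robust_loss(model, A_hat, X, y, mask).backward()   # outer minimization step
```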
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
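The filtering view is easiest to see in the linear-Gaussian case, where the closed-form update is exact; the paper extends this treatment to full networks via Bayesian filtering and smoothing. A minimal sequential (online, gradient-free) sketch for a linear model:

```python
import numpy as np

d = 3
m = np.zeros(d)                       # weight mean (the "state" estimate)
P = np.eye(d) * 10.0                  # weight covariance (broad prior)
r = 0.1                               # observation noise variance

def kalman_step(m, P, x, y):
    """Condition the Gaussian weight posterior on one observation
    y = w . x + noise (exact for a linear-Gaussian model)."""
    s = x @ P @ x + r                 # innovation variance
    k = P @ x / s                     # Kalman gain
    m_new = m + k * (y - x @ m)       # updated mean
    P_new = P - np.outer(k, x) @ P    # updated covariance
    return m_new, P_new

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
for _ in range(200):                  # sequential/online updates, no gradients
    x = rng.normal(size=d)
    y = x @ w_true + rng.normal(scale=r ** 0.5)
    m, P = kalman_step(m, P, x, y)
# m now approximates w_true; P quantifies the remaining uncertainty
```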
arXiv Detail & Related papers (2021-10-03T07:29:57Z)
- Probabilistic Numeric Convolutional Neural Networks [80.42120128330411]
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods.
We propose Probabilistic Numeric Convolutional Neural Networks, which represent features as Gaussian processes (GPs).
We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity.
In experiments we show that our approach yields a $3\times$ reduction in error over the previous state of the art on the SuperPixel-MNIST dataset, and competitive performance on the PhysioNet2012 medical time-series dataset.
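The first ingredient, representing an irregularly sampled signal as a GP, is ordinary GP regression; a minimal 1-D sketch with an RBF kernel is below. The PDE-evolution convolutional layers that the paper builds on top of this representation are not reproduced.

```python
import numpy as np

def gp_posterior(t_obs, y_obs, t_grid, length=0.2, noise=1e-2):
    """Posterior mean and variance of a GP (unit-amplitude RBF kernel)
    fitted to irregular samples, evaluated on a regular grid."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = k(t_grid, t_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

t_obs = np.sort(np.random.rand(30))               # irregular sample times
y_obs = np.sin(6 * t_obs) + 0.1 * np.random.randn(30)
mean, var = gp_posterior(t_obs, y_obs, np.linspace(0, 1, 100))
# (mean, var) is a feature with uncertainty, defined everywhere on the grid
```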
arXiv Detail & Related papers (2020-10-21T10:08:21Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Stein Variational Inference for Discrete Distributions [70.19352762933259]
We propose a simple yet general framework that transforms discrete distributions to equivalent piecewise continuous distributions.
Our method outperforms traditional algorithms such as Gibbs sampling and discontinuous Hamiltonian Monte Carlo.
We demonstrate that our method provides a promising tool for learning ensembles of binarized neural networks (BNNs).
In addition, such a transform can be straightforwardly employed in gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) tests on discrete distributions.
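One simple instance of such a transform: spread each atom of a distribution on {0, ..., K-1} uniformly over its unit cell [k, k+1), giving a piecewise-constant density whose continuous samples map back through floor(). The paper's construction may differ in detail; this is only the most basic version.

```python
import numpy as np

def piecewise_density(probs):
    """Piecewise-constant density equal to probs[k] on [k, k+1), so each
    cell carries exactly the mass of the corresponding discrete atom."""
    probs = np.asarray(probs, dtype=float)
    def pdf(x):
        x = np.asarray(x, dtype=float)
        k = np.floor(x).astype(int)
        out = np.zeros_like(x)
        inside = (k >= 0) & (k < len(probs))
        out[inside] = probs[k[inside]]
        return out
    return pdf

p = np.array([0.1, 0.6, 0.3])               # a discrete distribution
pdf = piecewise_density(p)
xs = np.linspace(-1.0, 4.0, 1001)
total_mass = np.trapz(pdf(xs), xs)          # ~1.0: a proper density
# A continuous sampler run on pdf yields x whose floor(x) follows p,
# so discrete inference reduces to continuous inference on pdf.
```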
arXiv Detail & Related papers (2020-03-01T22:45:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.