Copula Density Neural Estimation
- URL: http://arxiv.org/abs/2211.15353v1
- Date: Fri, 25 Nov 2022 10:53:27 GMT
- Title: Copula Density Neural Estimation
- Authors: Nunzio A. Letizia, Andrea M. Tonello
- Abstract summary: We exploit the concept of copula to build an estimate of the probability density function associated with any observed data.
Results show that the novel learning approach is capable of modeling complex distributions and can be applied to mutual information estimation and data generation.
- Score: 4.86067125387358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probability density estimation from observed data constitutes a central task
in statistics. Recent advancements in machine learning offer new tools but also
pose new challenges. The big data era demands analysis of long-range spatial
and long-term temporal dependencies in large collections of raw data, rendering
neural networks an attractive solution for density estimation. In this paper,
we exploit the concept of copula to explicitly build an estimate of the
probability density function associated with any observed data. In particular, we
separate univariate marginal distributions from the joint dependence structure
in the data, the copula itself, and we model the latter with a neural
network-based method referred to as copula density neural estimation (CODINE).
Results show that the novel learning approach is capable of modeling complex
distributions and can be applied to mutual information estimation and data
generation.
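The marginal/copula separation that underlies CODINE follows from Sklar's theorem: the joint density factorizes into the product of the univariate marginal densities and the copula density evaluated at the marginal CDF values. The sketch below is not the authors' method; it only illustrates the standard first step of computing pseudo-observations with the empirical CDF, after which a density model (such as the neural network in CODINE) could be fit to the remaining dependence structure. The function name and synthetic data are illustrative assumptions.

```python
import numpy as np

def pseudo_observations(x):
    """Map each column of x into (0, 1) via its empirical CDF
    (probability integral transform). The transformed marginals are
    approximately uniform; the dependence that remains is the copula."""
    n = x.shape[0]
    # Double argsort yields ranks 0..n-1 per column; shift to 1..n
    # and divide by n+1 so all values lie strictly inside (0, 1).
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1)

# Example: correlated bivariate Gaussian data
rng = np.random.default_rng(0)
cov = [[1.0, 0.8], [0.8, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=1000)

u = pseudo_observations(x)
print(u.shape)                 # (1000, 2), each column ~ Uniform(0, 1)
print(np.corrcoef(u.T)[0, 1])  # rank correlation survives the transform
```

Because the transform is rank-based, each column of `u` is approximately uniform while the dependence structure of `x` is preserved, which is exactly the information a copula density model must capture.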
Related papers
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Squared Neural Families: A New Class of Tractable Density Models [23.337256081314518]
We develop and investigate a new class of probability distributions, which we call a Squared Neural Family (SNEFY).
We show that SNEFYs admit closed form normalising constants in many cases of interest, thereby resulting in flexible yet fully tractable density models.
Their utility is illustrated on a variety of density estimation, conditional density estimation, and density estimation with missing data tasks.
arXiv Detail & Related papers (2023-05-22T23:56:11Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Featurized Density Ratio Estimation [82.40706152910292]
In our work, we propose to leverage an invertible generative model to map the two distributions into a common feature space prior to estimation.
This featurization brings the densities closer together in latent space, sidestepping pathological scenarios where the learned density ratios in input space can be arbitrarily inaccurate.
At the same time, the invertibility of our feature map guarantees that the ratios computed in feature space are equivalent to those in input space.
arXiv Detail & Related papers (2021-07-05T18:30:26Z)
- Deep Data Density Estimation through Donsker-Varadhan Representation [5.276937617129594]
We present a simple yet effective method for estimating the data density using a deep neural network and the Donsker-Varadhan variational lower bound on the KL divergence.
We show that the optimal critic function associated with the Donsker-Varadhan representation on the divergence between the data and the uniform distribution can estimate the data density.
arXiv Detail & Related papers (2021-04-14T03:38:32Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copulas.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- Semi-Structured Deep Piecewise Exponential Models [2.7728956081909346]
We propose a versatile framework for survival analysis that combines advanced concepts from statistics with deep learning.
A proof of concept is provided by using the framework to predict Alzheimer's disease progression.
arXiv Detail & Related papers (2020-11-11T14:41:19Z)
- Neural Approximate Sufficient Statistics for Implicit Models [34.44047460667847]
We frame the task of constructing sufficient statistics as learning mutual information maximizing representations of the data with the help of deep neural networks.
We apply our approach to both traditional approximate Bayesian computation and recent neural likelihood methods, boosting their performance on a range of tasks.
arXiv Detail & Related papers (2020-10-20T07:11:40Z)
- Latent Network Structure Learning from High Dimensional Multivariate Point Processes [5.079425170410857]
We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
arXiv Detail & Related papers (2020-04-07T17:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.