Generative Archimedean Copulas
- URL: http://arxiv.org/abs/2102.11351v2
- Date: Wed, 24 Feb 2021 02:53:11 GMT
- Title: Generative Archimedean Copulas
- Authors: Yuting Ng, Ali Hasan, Khalil Elkhalil, Vahid Tarokh
- Abstract summary: We propose a new generative modeling technique for learning multidimensional cumulative distribution functions (CDFs) in the form of copulas.
We consider certain classes of copulas known as Archimedean and hierarchical Archimedean copulas, popular for their parsimonious representation and ability to model different tail dependencies.
- Score: 27.705956325584026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new generative modeling technique for learning multidimensional
cumulative distribution functions (CDFs) in the form of copulas. Specifically,
we consider certain classes of copulas known as Archimedean and hierarchical
Archimedean copulas, popular for their parsimonious representation and ability
to model different tail dependencies. We consider their representation as
mixture models with Laplace transforms of latent random variables from
generative neural networks. This alternative representation allows for easy
sampling and computational efficiency, especially in high dimensions. We
additionally describe multiple methods for optimizing the model parameters.
Finally, we present empirical results that demonstrate the efficacy of our
proposed method in learning multidimensional CDFs and its computational
efficiency compared to existing methods.
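To make the representation concrete: an Archimedean copula is defined by a generator φ that, in the mixture view described above, is the Laplace transform of a positive latent "frailty" variable M. The following is the standard Marshall-Olkin form (our notation, not copied from the paper):

```latex
% Archimedean copula from the Laplace transform of a positive latent M
C(u_1, \dots, u_d) = \varphi\!\left(\varphi^{-1}(u_1) + \cdots + \varphi^{-1}(u_d)\right),
\qquad
\varphi(t) = \mathbb{E}\!\left[e^{-tM}\right] = \int_0^\infty e^{-tm}\, dF_M(m).
```

Below is a minimal sketch of the sampling path this representation enables, assuming a small PyTorch network as the generative model for M. The class name `LatentGenerator`, the Monte Carlo estimate of φ, and all hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LatentGenerator(nn.Module):
    """Hypothetical generative network producing positive latent samples M."""
    def __init__(self, noise_dim=4, hidden=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # Softplus keeps M > 0
        )

    def sample(self, n):
        z = torch.randn(n, self.noise_dim)
        return self.net(z).squeeze(-1)  # (n,) positive frailty samples

def phi(t, m_samples):
    """Monte Carlo estimate of the Archimedean generator
    phi(t) = E[exp(-t M)], the Laplace transform of M."""
    return torch.exp(-t.unsqueeze(-1) * m_samples).mean(dim=-1)

def sample_copula(gen, n, d, n_mc=2000):
    """Marshall-Olkin sampling: U_i = phi(E_i / M) with E_i ~ Exp(1)
    yields n points with uniform marginals and Archimedean dependence."""
    with torch.no_grad():
        m = gen.sample(n)                                 # one frailty per point
        e = torch.distributions.Exponential(1.0).sample((n, d))
        m_mc = gen.sample(n_mc)                           # draws for the MC Laplace transform
        return phi(e / m.unsqueeze(-1), m_mc)             # (n, d) values in (0, 1)

# Example: draw 5 bivariate samples from an (untrained) generative copula.
u = sample_copula(LatentGenerator(), n=5, d=2)
```

In the paper's setting the network would be trained so that the implied φ matches the observed dependence structure; the sketch only illustrates why sampling is easy once M can be drawn from a generative network.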
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Proximal Interacting Particle Langevin Algorithms [0.0]
We introduce Proximal Interacting Particle Langevin Algorithms (PIPLA) for inference and learning in latent variable models.
We propose several variants within the novel proximal IPLA family, tailored to the problem of estimating parameters in a non-differentiable statistical model.
Our theory and experiments together show that the PIPLA family can be the de facto choice for parameter estimation in non-differentiable latent variable models.
arXiv Detail & Related papers (2024-06-20T13:16:41Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output Gaussian process (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Federated Variational Inference Methods for Structured Latent Variable Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm.
arXiv Detail & Related papers (2023-02-07T08:35:04Z)
- Spectral Diffusion Processes [26.510979162244304]
Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.
We represent functional data in spectral space to dissociate the stochastic part of the processes from their space-time part.
arXiv Detail & Related papers (2022-09-28T14:23:41Z)
- Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving the Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z)
- Deep Extreme Value Copulas for Estimation and Sampling [35.93835819721815]
We propose a new method for modeling the distribution function of high dimensional extreme value distributions.
We present new methods for recovering the spectral representation of extreme distributions and propose a generative model for sampling from extreme copulas.
arXiv Detail & Related papers (2021-02-17T22:02:47Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Analysis of Bayesian Inference Algorithms by the Dynamical Functional Approach [2.8021833233819486]
We analyze an algorithm for approximate inference with large Gaussian latent variable models in a student-teacher scenario.
For the case of perfect data-model matching, the knowledge of static order parameters derived from the replica method allows us to obtain efficient algorithmic updates.
arXiv Detail & Related papers (2020-01-14T17:22:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.