Vine Copulas as Differentiable Computational Graphs
- URL: http://arxiv.org/abs/2506.13318v1
- Date: Mon, 16 Jun 2025 09:57:36 GMT
- Title: Vine Copulas as Differentiable Computational Graphs
- Authors: Tuoyuan Cheng, Thibault Vatter, Thomas Nagler, Kan Chen
- Abstract summary: We introduce the vine computational graph, a DAG that abstracts the multilevel vine structure and associated computations. We devise new algorithms for conditional sampling, efficient sampling-order scheduling, and constructing vine structures for customized conditioning variables. We implement these ideas in torchvinecopulib, a GPU-accelerated Python library built upon PyTorch, delivering improved scalability for fitting, sampling, and density evaluation.
- Score: 7.87903514809639
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Vine copulas are sophisticated models for multivariate distributions and are increasingly used in machine learning. To facilitate their integration into modern ML pipelines, we introduce the vine computational graph, a DAG that abstracts the multilevel vine structure and associated computations. On this foundation, we devise new algorithms for conditional sampling, efficient sampling-order scheduling, and constructing vine structures for customized conditioning variables. We implement these ideas in torchvinecopulib, a GPU-accelerated Python library built upon PyTorch, delivering improved scalability for fitting, sampling, and density evaluation. Our experiments illustrate how gradients flowing through the vine can improve Vine Copula Autoencoders and that incorporating vines for uncertainty quantification in deep learning can outperform MC-dropout, deep ensembles, and Bayesian Neural Networks in sharpness, calibration, and runtime. By recasting vine copula models as computational graphs, our work connects classical dependence modeling with modern deep-learning toolchains and facilitates the integration of state-of-the-art copula methods in modern machine learning pipelines.
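The abstract's core idea, that a vine's pair-copula computations can live inside a standard autodiff graph, can be illustrated without the library itself. Below is a minimal PyTorch sketch (not the torchvinecopulib API; the function name gaussian_copula_log_density and the tanh parameterization are assumptions for illustration) that fits the correlation parameter of a single bivariate Gaussian copula, the building block of a vine, purely by backpropagating through its log-density.

```python
# Minimal sketch: a bivariate Gaussian copula as a differentiable PyTorch node.
# This is NOT the torchvinecopulib API; names and setup are illustrative only.
import torch
from torch.distributions import Normal

def gaussian_copula_log_density(u, v, rho):
    """Log-density log c(u, v; rho) of the bivariate Gaussian copula."""
    std_normal = Normal(0.0, 1.0)
    x, y = std_normal.icdf(u), std_normal.icdf(v)  # map to normal scores
    one_minus_rho2 = 1.0 - rho ** 2
    return (-0.5 * torch.log(one_minus_rho2)
            - (rho ** 2 * (x ** 2 + y ** 2) - 2.0 * rho * x * y)
            / (2.0 * one_minus_rho2))

# Synthetic pseudo-observations from a Gaussian copula with true rho = 0.7.
torch.manual_seed(0)
z = torch.randn(512, 2)
x = z[:, 0]
y = 0.7 * x + (1.0 - 0.7 ** 2) ** 0.5 * z[:, 1]
std_normal = Normal(0.0, 1.0)
u, v = std_normal.cdf(x), std_normal.cdf(y)

# Gradients flow through the copula density like any other graph node.
theta = torch.zeros((), requires_grad=True)  # unconstrained parameter
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    nll = -gaussian_copula_log_density(u, v, torch.tanh(theta)).mean()
    nll.backward()
    opt.step()
print(f"fitted rho: {torch.tanh(theta).item():.3f}")  # approaches 0.7
```

Stacking such pair-copula nodes tree by tree, with h-functions passing pseudo-observations between levels, would yield a DAG of the kind the paper formalizes; the tanh reparameterization simply keeps rho in (-1, 1) during unconstrained optimization.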
Related papers
- Tensor Decompositions Meet Control Theory: Learning General Mixtures of
Linear Dynamical Systems [19.47235707806519]
We give a new approach to learning mixtures of linear dynamical systems based on tensor decompositions.
Our algorithm succeeds without strong separation conditions on the components, and can be used to compete with the Bayes optimal clustering of the trajectories.
arXiv Detail & Related papers (2023-07-13T03:00:01Z) - GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function intrinsic in the decay of its spectrum, we simultaneously obtain precise certificates and leverage the advanced and powerful computational techniques developed to optimize neural networks.
arXiv Detail & Related papers (2023-06-26T09:42:59Z) - NIO: Lightweight neural operator-based architecture for video frame
interpolation [15.875579519177487]
NIO is a lightweight, efficient neural operator-based architecture for video frame interpolation.
We show that NIO can produce visually-smooth and accurate results and converges in fewer epochs than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T20:30:47Z) - Learning with MISELBO: The Mixture Cookbook [62.75516608080322]
We present the first ever mixture of variational approximations for a normalizing flow-based hierarchical variational autoencoder (VAE) with VampPrior and a PixelCNN decoder network.
We explain this cooperative behavior by drawing a novel connection between VI and adaptive importance sampling.
We obtain state-of-the-art results among VAE architectures in terms of negative log-likelihood on the MNIST and FashionMNIST datasets.
arXiv Detail & Related papers (2022-09-30T15:01:35Z) - Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied to optimal wireless resource allocation and to a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - A Data-Centric Optimization Framework for Machine Learning [9.57755812904772]
We empower deep learning researchers by defining a flexible and user-customizable pipeline for training arbitrary deep neural networks.
The pipeline begins with standard networks in PyTorch or ONNX and transforms through progressive lowering.
We demonstrate competitive performance or speedups on ten different networks, with interactive optimizations discovering new opportunities in EfficientNet.
arXiv Detail & Related papers (2021-10-20T22:07:40Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Improving accuracy and speeding up Document Image Classification through
parallel systems [4.102028235659611]
We show in the RVL-CDIP dataset that we can improve previous results with a much lighter model.
We present an ensemble pipeline that boosts results obtained from image input alone.
Lastly, we expose the training performance differences between PyTorch and other deep learning frameworks.
arXiv Detail & Related papers (2020-06-16T13:36:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.