Learning and interpreting asymmetry-labeled DAGs: a case study on
COVID-19 fear
- URL: http://arxiv.org/abs/2301.00629v1
- Date: Mon, 2 Jan 2023 12:48:17 GMT
- Title: Learning and interpreting asymmetry-labeled DAGs: a case study on
COVID-19 fear
- Authors: Manuele Leonelli and Gherardo Varando
- Abstract summary: Asymmetry-labeled DAGs have been proposed to extend the class of Bayesian networks by relaxing the symmetric assumption of independence and labeling the type of dependence between variables.
We introduce novel structural learning algorithms for this class of models which, whilst being efficient, allow for a straightforward interpretation of the underlying dependence structure.
A real-world data application using data from the Fear of COVID-19 Scale collected in Italy showcases their use in practice.
- Score: 2.3572498744567127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian networks are widely used to learn and reason about the dependence
structure of discrete variables. However, they are only capable of formally
encoding symmetric conditional independence, which in practice is often too
strict to hold. Asymmetry-labeled DAGs have been recently proposed to both
extend the class of Bayesian networks by relaxing the symmetric assumption of
independence and denote the type of dependence existing between the variables
of interest. Here, we introduce novel structural learning algorithms for this
class of models which, whilst being efficient, allow for a straightforward
interpretation of the underlying dependence structure. A comprehensive
computational study highlights the efficiency of the algorithms. A real-world
data application using data from the Fear of COVID-19 Scale collected in Italy
showcases their use in practice.
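The asymmetric relationships that these models label are context-specific independences: Y may be independent of X only in some contexts Z = z. A minimal sketch of detecting such a pattern from samples is below; this is an illustration of the concept, not the paper's learning algorithm, and the function name, input format, and tolerance are all assumptions.

```python
# Hypothetical sketch: detect a context-specific (asymmetric) independence
# X _||_ Y | Z = z that holds for some, but not all, values z -- the kind of
# asymmetric relationship that asymmetry-labeled DAGs annotate on edges.
import itertools
from collections import Counter

def csi_contexts(samples, x, y, z, tol=0.05):
    """Return the contexts z = v in which Y looks independent of X.

    `samples` is a list of dicts mapping variable names to values.
    A context passes if the conditional distributions P(Y | X = x', Z = v)
    differ by at most `tol` in total variation across values of X.
    """
    contexts = []
    for v in {s[z] for s in samples}:
        sub = [s for s in samples if s[z] == v]
        # Empirical conditional distribution of Y for each X value in context v.
        dists = {}
        for xv in {s[x] for s in sub}:
            rows = [s[y] for s in sub if s[x] == xv]
            c = Counter(rows)
            dists[xv] = {yv: c[yv] / len(rows) for yv in c}
        # Total-variation distance between every pair of conditionals.
        ys = {s[y] for s in sub}
        ok = all(
            0.5 * sum(abs(d1.get(yv, 0) - d2.get(yv, 0)) for yv in ys) <= tol
            for d1, d2 in itertools.combinations(dists.values(), 2)
        )
        if ok:
            contexts.append(v)
    return contexts
```

For example, if Y is constant whenever Z = 0 but tracks X whenever Z = 1, the function reports independence only in the context Z = 0, which is exactly the kind of edge label a symmetric Bayesian network cannot express.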
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Learning Discretized Bayesian Networks with GOMEA [0.0]
We extend an existing state-of-the-art structure learning approach to jointly learn variable discretizations.
We show how this enables incorporating expert knowledge in a uniquely insightful fashion, finding multiple DBNs that trade-off complexity, accuracy, and the difference with a pre-determined expert network.
arXiv Detail & Related papers (2024-02-19T14:29:35Z)
- Distributionally Robust Skeleton Learning of Discrete Bayesian Networks [9.46389554092506]
We consider the problem of learning the exact skeleton of general discrete Bayesian networks from potentially corrupted data.
We propose to optimize the most adverse risk over a family of distributions within bounded Wasserstein distance or KL divergence to the empirical distribution.
We present efficient algorithms and show the proposed methods are closely related to the standard regularized regression approach.
arXiv Detail & Related papers (2023-11-10T15:33:19Z)
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Self-learn to Explain Siamese Networks Robustly [22.913886901196353]
Learning to compare two objects is used in digital forensics, face recognition, and brain network analysis, especially when labeled data is scarce.
As these applications make high-stakes decisions involving societal values like fairness and imbalance, it is critical to explain the learned models.
arXiv Detail & Related papers (2021-09-15T15:28:39Z)
- Differential Privacy and Byzantine Resilience in SGD: Do They Add Up? [6.614755043607777]
We study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and $(\alpha,f)$-Byzantine resilience.
We show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters in the ML model.
arXiv Detail & Related papers (2021-02-16T14:10:38Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)
- Automated extraction of mutual independence patterns using Bayesian comparison of partition models [7.6146285961466]
Mutual independence is a key concept in statistics that characterizes the structural relationships between variables.
Existing methods to investigate mutual independence rely on the definition of two competing models.
We propose a general Markov chain Monte Carlo (MCMC) algorithm to numerically approximate the posterior distribution on the space of all patterns of mutual independence.
arXiv Detail & Related papers (2020-01-15T16:21:48Z)
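The last entry's core idea, sampling the posterior over patterns of mutual independence, amounts to running MCMC on the space of set partitions of the variables. A minimal sketch of such a sampler is below; the score function is a toy stand-in for the paper's marginal likelihood, and for brevity the plain Metropolis ratio is used where a full implementation would add the Hastings correction for this asymmetric proposal.

```python
# Hypothetical sketch: a Metropolis sampler over set partitions of n variables,
# approximating a posterior on mutual-independence patterns. The score function
# passed in is a toy stand-in, not the paper's Bayesian model comparison.
import random

def propose(partition, n):
    """Move one random element to another block, or to a new singleton block."""
    blocks = [set(b) for b in partition]
    i = random.randrange(n)
    for b in blocks:
        if i in b:
            b.remove(i)
            break
    blocks = [b for b in blocks if b]  # drop the block if it became empty
    choice = random.randrange(len(blocks) + 1)
    if choice == len(blocks):
        blocks.append({i})             # open a new singleton block
    else:
        blocks[choice].add(i)          # join an existing block
    return [frozenset(b) for b in blocks]

def mh_partitions(score, n, steps=2000, seed=0):
    """Return visit counts per partition under an (uncorrected) Metropolis chain."""
    random.seed(seed)
    current = [frozenset([i]) for i in range(n)]  # start from all singletons
    counts = {}
    for _ in range(steps):
        cand = propose(current, n)
        # Accept with the ratio of unnormalised posterior scores.
        if random.random() < min(1.0, score(cand) / score(current)):
            current = cand
        key = frozenset(current)  # hashable representation of the partition
        counts[key] = counts.get(key, 0) + 1
    return counts
```

With a toy score that rewards grouping variables 0 and 1 into one block, the chain spends most of its time in partitions where they share a block, mirroring how the posterior would concentrate on well-supported independence patterns.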
This list is automatically generated from the titles and abstracts of the papers in this site.