Structure Learning and Parameter Estimation for Graphical Models via
Penalized Maximum Likelihood Methods
- URL: http://arxiv.org/abs/2301.13269v1
- Date: Mon, 30 Jan 2023 20:26:13 GMT
- Authors: Maryia Shpak (Maria Curie-Sklodowska University in Lublin)
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic graphical models (PGMs) provide a compact and flexible
framework for modelling very complex real-life phenomena. They combine
probability theory, which deals with uncertainty, with a logical structure
represented by a graph, which keeps the computational complexity manageable
and makes the obtained knowledge easier to interpret and communicate. In the
thesis, we consider two different types of PGMs: Bayesian networks (BNs),
which are static, and continuous time Bayesian networks (CTBNs), which, as the
name suggests, have a temporal component. We are interested in recovering
their true structure, which is the first step in learning any PGM. This is a
challenging task that is interesting in its own right from the causal point of
view, for interpreting the model, and for the decision-making process. All
approaches to structure learning in the thesis share the same idea: maximum
likelihood estimation with a LASSO penalty. The problem of structure learning
is reduced to finding the non-zero coefficients of the LASSO estimator for a
generalized linear model. For CTBNs, we consider the problem for both complete
and incomplete data. We support the theoretical results with experiments.
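As an illustration of the reduction described in the abstract — a minimal sketch, not the thesis's exact procedure — the following uses neighborhood selection: each node is LASSO-regressed on all of the others, and the non-zero coefficients are read off as edges. The simulated model, the penalty value, and the thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated model over 5 variables: X2 = 0.8*X0 - 0.6*X1 + noise;
# the remaining variables are independent noise.
n, p = 1000, 5
X = rng.normal(size=(n, p))
X[:, 2] = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.1 * rng.normal(size=n)

# Neighborhood selection: LASSO-regress each node on the others and
# take the non-zero coefficients as (undirected) edge indicators.
edges = set()
for j in range(p):
    others = [k for k in range(p) if k != j]
    coef = Lasso(alpha=0.15).fit(X[:, others], X[:, j]).coef_
    edges |= {tuple(sorted((j, k)))
              for k, c in zip(others, coef) if abs(c) > 1e-3}

# The recovered graph is the conditional-independence (moral) graph:
# it also joins X0 and X1, which are dependent given their child X2.
print(sorted(edges))
```

Note that this recovers the undirected conditional-independence structure; orienting the edges into a BN, or handling the temporal dynamics of a CTBN, requires the more specialized machinery developed in the thesis.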
Related papers
- Towards Compositional Interpretability for XAI [3.3768167170511587]
We present an approach to defining AI models and their interpretability based on category theory.
We compare a wide range of AI models as compositional models.
We find that the transparency of the standard 'intrinsically interpretable' models is brought out most clearly in diagrammatic form.
arXiv Detail & Related papers (2024-06-25T14:27:03Z)
- Robust Model Selection of Gaussian Graphical Models [16.933125281564163]
Noise-corrupted samples present significant challenges in graphical model selection.
We propose an algorithm which provably recovers the underlying graph up to the identified ambiguity.
This information is useful in a range of real-world problems, including power grids, social networks, protein-protein interactions, and neural structures.
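For context, Gaussian graphical model selection in the clean-sample setting (not the robust procedure this paper proposes) is classically handled by the graphical lasso: L1-penalized Gaussian maximum likelihood for the precision matrix, whose non-zero off-diagonal pattern is the estimated edge set. The chain model and penalty value below are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Ground truth: a 4-node chain 0 - 1 - 2 - 3, encoded by a sparse
# precision matrix with non-zeros only on the chain edges.
prec = np.array([[2.0, 0.6, 0.0, 0.0],
                 [0.6, 2.0, 0.6, 0.0],
                 [0.0, 0.6, 2.0, 0.6],
                 [0.0, 0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=2000)

# Graphical lasso: L1-penalized Gaussian MLE of the precision matrix;
# non-zero off-diagonal entries are the estimated edges.
est = GraphicalLasso(alpha=0.05).fit(X).precision_
edges = {(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(est[i, j]) > 1e-3}
print(sorted(edges))
```

With the penalty set above the sampling-noise level, the non-chain precision entries are shrunk exactly to zero while the chain edges survive.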
arXiv Detail & Related papers (2022-11-10T16:50:50Z)
- Neural Graphical Models [2.6842860806280058]
We introduce Neural Graphical Models (NGMs) to represent complex feature dependencies with reasonable computational costs.
We capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework.
NGMs can fit generic graph structures including directed, undirected and mixed-edge graphs as well as support mixed input data types.
arXiv Detail & Related papers (2022-10-02T07:59:51Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
A structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
- Structure learning for CTBN's via penalized maximum likelihood methods [2.997206383342421]
We study the structure learning problem, a more challenging task on which the existing research is limited.
We prove that our algorithm, under mild regularity conditions, recognizes the dependence structure of the graph with high probability.
arXiv Detail & Related papers (2020-06-13T14:28:19Z)
- Bayesian network structure learning with causal effects in the presence of latent variables [6.85316573653194]
This paper describes a hybrid structure learning algorithm, called CCHM, which combines the constraint-based part of cFCI with score-based learning.
Experiments based on both randomised and well-known networks show that CCHM improves the state-of-the-art in terms of reconstructing the true ancestral graph.
arXiv Detail & Related papers (2020-05-29T04:42:28Z)