Learning Discretized Bayesian Networks with GOMEA
- URL: http://arxiv.org/abs/2402.12175v1
- Date: Mon, 19 Feb 2024 14:29:35 GMT
- Title: Learning Discretized Bayesian Networks with GOMEA
- Authors: Damy M.F. Ha, Tanja Alderliesten, Peter A.N. Bosman
- Abstract summary: We extend an existing state-of-the-art structure learning approach to jointly learn variable discretizations.
We show how this enables incorporating expert knowledge in a uniquely insightful fashion, finding multiple DBNs that trade off complexity, accuracy, and the difference from a pre-determined expert network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian networks model relationships between random variables under
uncertainty and can be used to predict the likelihood of events and outcomes
while incorporating observed evidence. From an eXplainable AI (XAI)
perspective, such models are interesting as they tend to be compact. Moreover,
captured relations can be directly inspected by domain experts. In practice,
data is often real-valued. Unless assumptions of normality can be made,
discretization is often required. The optimal discretization, however, depends
on the relations modelled between the variables. This complicates learning
Bayesian networks from data. For this reason, most literature focuses on
learning conditional dependencies between sets of variables, called structure
learning. In this work, we extend an existing state-of-the-art structure
learning approach based on the Gene-pool Optimal Mixing Evolutionary Algorithm
(GOMEA) to jointly learn variable discretizations. The proposed Discretized
Bayesian Network GOMEA (DBN-GOMEA) obtains results similar to or better than
the current state of the art when tasked with retrieving randomly generated
ground-truth networks. Moreover, leveraging a key strength of evolutionary
algorithms, we can straightforwardly perform DBN learning multi-objectively. We
show how this enables incorporating expert knowledge in a uniquely insightful
fashion, finding multiple DBNs that trade off complexity, accuracy, and the
difference from a pre-determined expert network.
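The abstract's key idea, jointly optimizing the network structure and the per-variable discretizations while scoring candidates on complexity, accuracy, and the difference from an expert network, can be illustrated with a small sketch. The code below is not the authors' implementation; the `Candidate` encoding, the quantile-based discretization, and the multinomial log-likelihood score are assumptions chosen purely for illustration.

```python
# Hypothetical sketch (not the paper's code): a candidate that jointly encodes a
# Bayesian network structure and per-variable discretizations, evaluated on the
# three objectives mentioned in the abstract.
from dataclasses import dataclass
import numpy as np

@dataclass
class Candidate:
    adjacency: np.ndarray   # (n, n) binary matrix; adjacency[i, j] = 1 means i -> j
    n_bins: np.ndarray      # (n,) number of discretization bins per variable

def discretize(data: np.ndarray, n_bins: np.ndarray) -> np.ndarray:
    """Equal-frequency (quantile) discretization, one bin count per column."""
    disc = np.empty_like(data, dtype=int)
    for j in range(data.shape[1]):
        edges = np.quantile(data[:, j], np.linspace(0, 1, n_bins[j] + 1)[1:-1])
        disc[:, j] = np.searchsorted(edges, data[:, j])
    return disc

def log_likelihood(disc: np.ndarray, cand: Candidate) -> float:
    """Multinomial log-likelihood of the discretized data under the DAG."""
    ll = 0.0
    for j in range(disc.shape[1]):
        parents = np.flatnonzero(cand.adjacency[:, j])
        # Group rows by parent configuration and count child outcomes.
        keys = disc[:, parents] if parents.size else np.zeros((disc.shape[0], 1), dtype=int)
        for key in np.unique(keys, axis=0):
            rows = np.all(keys == key, axis=1)
            counts = np.bincount(disc[rows, j], minlength=cand.n_bins[j]).astype(float)
            probs = counts / counts.sum()
            ll += np.sum(counts[counts > 0] * np.log(probs[counts > 0]))
    return ll

def objectives(cand: Candidate, data: np.ndarray, expert_adj: np.ndarray) -> tuple:
    """Three objectives: model complexity, data fit, and difference from an expert DAG."""
    disc = discretize(data, cand.n_bins)
    complexity = int(cand.adjacency.sum() + cand.n_bins.sum())    # minimise
    accuracy = log_likelihood(disc, cand)                         # maximise
    expert_diff = int(np.abs(cand.adjacency - expert_adj).sum())  # minimise
    return complexity, accuracy, expert_diff
```

In a GOMEA-style loop, variation would mix adjacency entries and bin counts between candidates, and the multi-objective variant described in the abstract would maintain a Pareto front over these three objectives rather than a single best solution.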
Related papers
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z) - Detection of Interacting Variables for Generalized Linear Models via
Neural Networks [0.0]
We present an approach to automating the process of finding interactions that should be added to generalized linear models (GLMs).
Our approach relies on neural networks and a model-specific interaction detection method, which is computationally faster than traditionally used methods such as the Friedman H-Statistic or SHAP values.
In numerical studies, we provide the results of our approach on artificially generated data as well as open-source data.
arXiv Detail & Related papers (2022-09-16T16:16:45Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear
Modulation [69.34011200590817]
We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation.
By modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity.
We show that FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks.
arXiv Detail & Related papers (2022-05-31T18:33:15Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Towards Federated Bayesian Network Structure Learning with Continuous
Optimization [14.779035801521717]
We present a cross-silo federated learning approach to estimate the structure of a Bayesian network.
We develop a distributed structure learning method based on continuous optimization.
arXiv Detail & Related papers (2021-10-18T14:36:05Z) - Prequential MDL for Causal Structure Learning with Neural Networks [9.669269791955012]
We show that the prequential minimum description length principle can be used to derive a practical scoring function for Bayesian networks.
We obtain plausible and parsimonious graph structures without relying on sparsity inducing priors or other regularizers which must be tuned.
We discuss how the prequential score relates to recent work that infers causal structure from the speed of adaptation when the observations come from a source undergoing distributional shift.
arXiv Detail & Related papers (2021-07-02T22:35:21Z) - Fully differentiable model discovery [0.0]
We propose an approach combining neural network-based surrogates with Sparse Bayesian Learning.
Our work expands PINNs to various types of neural network architectures, and connects neural network-based surrogates to the rich field of Bayesian parameter inference.
arXiv Detail & Related papers (2021-06-09T08:11:23Z) - Towards Understanding Ensemble, Knowledge Distillation and
Self-Distillation in Deep Learning [93.18238573921629]
We study how an ensemble of deep learning models can improve test accuracy, and how the superior performance of an ensemble can be distilled into a single model.
We show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory.
We prove that self-distillation can also be viewed as implicitly combining ensemble and knowledge distillation to improve test accuracy.
arXiv Detail & Related papers (2020-12-17T18:34:45Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework preserves the relations between samples well.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)