How do some Bayesian Network machine learned graphs compare to causal
knowledge?
- URL: http://arxiv.org/abs/2101.10461v2
- Date: Tue, 2 Feb 2021 15:10:57 GMT
- Title: How do some Bayesian Network machine learned graphs compare to causal
knowledge?
- Authors: Anthony C. Constantinou, Norman Fenton, Martin Neil
- Abstract summary: The graph of a Bayesian Network (BN) can be machine learned, determined by causal knowledge, or a combination of both.
This paper focuses on purely machine learned and purely knowledge-based BNs.
It investigates their differences in terms of graphical structure and how well the implied statistical models explain the data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The graph of a Bayesian Network (BN) can be machine learned, determined by
causal knowledge, or a combination of both. In disciplines like bioinformatics,
applying BN structure learning algorithms can reveal new insights that would
otherwise remain unknown. However, these algorithms are less effective when the
input data are limited in terms of sample size, which is often the case when
working with real data. This paper focuses on purely machine learned and purely
knowledge-based BNs and investigates their differences in terms of graphical
structure and how well the implied statistical models explain the data. The
tests are based on four previous case studies whose BN structure was determined
by domain knowledge. Using various metrics, we compare the knowledge-based
graphs to the machine learned graphs generated from various algorithms
implemented in TETRAD spanning all three classes of learning. The results show
that, while the algorithms produce graphs with much higher model selection
score, the knowledge-based graphs are more accurate predictors of variables of
interest. Maximising score fitting is ineffective in the presence of limited
sample size because the fitting becomes increasingly distorted with limited
data, guiding algorithms towards graphical patterns that share higher fitting
scores and yet deviate considerably from the true graph. This highlights the
value of causal knowledge in these cases, as well as the need for more
appropriate fitting scores suitable for limited data. Lastly, the experiments
also provide new evidence supporting the notion that results from simulated
data tell us little about actual real-world performance.
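The score-based comparison described above can be sketched with a toy BIC scorer for discrete networks. This is a minimal illustration, not TETRAD's implementation: the variables, dataset, and the two candidate structures below are made up for the example.

```python
import math
from collections import Counter

def bic_score(structure, data):
    """BIC of a discrete (binary) Bayesian network structure.

    structure: dict mapping each variable to a tuple of parent names.
    data: list of dicts {variable: 0 or 1}.
    Returns maximum log-likelihood minus the (k/2)*ln(N) complexity penalty.
    """
    n = len(data)
    ll, n_params = 0.0, 0
    for var, parents in structure.items():
        # Counts of (parent configuration, variable value) and of parent configuration.
        joint = Counter((tuple(row[p] for p in parents), row[var]) for row in data)
        marg = Counter(tuple(row[p] for p in parents) for row in data)
        for (pa, _), c in joint.items():
            ll += c * math.log(c / marg[pa])  # c/marg[pa] is the MLE conditional probability
        n_params += (2 - 1) * 2 ** len(parents)  # free parameters for a binary variable
    return ll - 0.5 * n_params * math.log(n)

# A tiny sample: C is a copy of A; B is independent noise.
data = [{"A": a, "B": b, "C": a} for a in (0, 1) for b in (0, 1)] * 3

sparse = {"A": (), "B": (), "C": ("A",)}          # "knowledge-based" graph
dense  = {"A": (), "B": ("A",), "C": ("A", "B")}  # overparameterised alternative

print(bic_score(sparse, data), bic_score(dense, data))
```

On this clean toy sample the sparser true graph wins, because both structures fit the data equally well and BIC's (k/2)·ln N penalty then favours fewer parameters; the paper's point is that on limited noisy data the likelihood term itself becomes distorted, so score maximisation can reward spurious edges.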
Related papers
- A Full DAG Score-Based Algorithm for Learning Causal Bayesian Networks with Latent Confounders [0.0]
Causal Bayesian networks (CBN) are popular graphical probabilistic models that encode causal relations among variables.
This paper introduces the first fully score-based structure learning algorithm searching the space of DAGs that is capable of identifying the presence of some latent confounders.
arXiv Detail & Related papers (2024-08-20T20:25:56Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
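As a concrete example of the hand-crafted substructure features mentioned above, a classic graph-level feature is the triangle count. The sketch below is a generic illustration, not taken from the surveyed paper.

```python
from itertools import combinations

def triangle_count(adj):
    """Count triangles in an undirected graph given as {node: set(neighbours)}."""
    return sum(
        1
        for u, v, w in combinations(sorted(adj), 3)
        if v in adj[u] and w in adj[u] and w in adj[v]
    )

# Complete graph on 4 nodes: every 3-node subset forms a triangle.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(triangle_count(k4))  # 4
```

Features like this were typically stacked into a fixed-length vector per graph and fed to a conventional classifier, which is exactly the step that learned graph representations automate.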
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection [67.90365841083951]
We develop a new graph contrastive learning framework GOOD-D for detecting OOD graphs without using any ground-truth labels.
GOOD-D is able to capture the latent ID patterns and accurately detect OOD graphs based on the semantic inconsistency in different granularities.
As a pioneering work in unsupervised graph-level OOD detection, we build a comprehensive benchmark to compare our proposed approach with different state-of-the-art methods.
arXiv Detail & Related papers (2022-11-08T12:41:58Z)
- Synthetic Graph Generation to Benchmark Graph Learning [7.914804101579097]
Graph learning algorithms have attained state-of-the-art performance on many graph analysis tasks.
One reason is the very small number of datasets used in practice to benchmark the performance of graph learning algorithms.
We propose to generate synthetic graphs, and study the behaviour of graph learning algorithms in a controlled scenario.
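A controlled benchmark of this kind can start from a seeded random-graph generator; the Erdős–Rényi sketch below is a generic illustration, not the paper's generator.

```python
import random

def erdos_renyi(n, p, seed=0):
    """Return the edge list of a G(n, p) random graph, reproducible via the seed."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

edges = erdos_renyi(20, 0.3, seed=42)
print(len(edges), "edges out of", 20 * 19 // 2, "possible")
```

Because the generator is seeded, every algorithm under test sees the same graphs, and parameters such as size and density can be varied one at a time.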
arXiv Detail & Related papers (2022-04-04T10:48:32Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Information fusion between knowledge and data in Bayesian network structure learning [5.994412766684843]
This paper describes and evaluates a set of information fusion methods that have been implemented in the open-source Bayesys structure learning system.
The results are illustrated both with limited and big data, with application to three BN structure learning algorithms available in Bayesys.
arXiv Detail & Related papers (2021-01-31T15:45:29Z)
- A Unifying Generative Model for Graph Learning Algorithms: Label Propagation, Graph Convolutions, and Combinations [39.8498896531672]
Semi-supervised learning on graphs is a widely applicable problem in network science and machine learning.
We develop a Markov random field model for the data generation process of node attributes.
We show that label propagation, a linearized graph convolutional network, and their combination can all be derived as conditional expectations.
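The label propagation referred to above can, in its classic harmonic form, be sketched as follows. The graph and seed labels are made up for illustration; the paper's Markov random field derivation generalises this iteration.

```python
def label_propagation(adj, seeds, n_iter=200):
    """Harmonic label propagation: each unlabeled node repeatedly takes the
    average score of its neighbours, while seed nodes stay clamped.
    Assumes every unlabeled node has at least one neighbour."""
    f = {v: seeds.get(v, 0.0) for v in adj}
    for _ in range(n_iter):
        f = {
            v: seeds[v] if v in seeds
            else sum(f[u] for u in adj[v]) / len(adj[v])
            for v in adj
        }
    return f

# Path graph 0-1-2-3-4 with node 0 labeled 1.0 and node 4 labeled 0.0:
# at convergence the scores interpolate linearly along the path.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = label_propagation(adj, {0: 1.0, 4: 0.0})
print(scores)
```

The fixed point of this iteration is the harmonic function with the seeds as boundary values, which is the conditional-expectation view the paper builds on.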
arXiv Detail & Related papers (2021-01-19T17:07:08Z)
- NodeNet: A Graph Regularised Neural Network for Node Classification [0.0]
Most AI/ML techniques leave out the linkages among data points.
A recent surge of interest in graph-based AI/ML techniques aims to leverage these linkages.
We propose a model using NGL - NodeNet, to solve the node classification task for citation graphs.
arXiv Detail & Related papers (2020-06-16T09:41:58Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.