Learning Bayesian Networks in the Presence of Structural Side
Information
- URL: http://arxiv.org/abs/2112.10884v1
- Date: Mon, 20 Dec 2021 22:14:19 GMT
- Title: Learning Bayesian Networks in the Presence of Structural Side
Information
- Authors: Ehsan Mokhtarian, Sina Akbari, Fateme Jamshidi, Jalal Etesami, Negar
Kiyavash
- Abstract summary: We study the problem of learning a Bayesian network (BN) of a set of variables when structural side information about the system is available.
We develop an algorithm that efficiently incorporates such knowledge into the learning process.
As a consequence of our work, we show that bounded treewidth BNs can be learned with polynomial complexity.
- Score: 22.734574764075226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning a Bayesian network (BN) of a set of
variables when structural side information about the system is available. It is
well known that learning the structure of a general BN is both computationally
and statistically challenging. However, often in many applications, side
information about the underlying structure can potentially reduce the learning
complexity. In this paper, we develop a recursive constraint-based algorithm
that efficiently incorporates such knowledge (i.e., side information) into the
learning process. In particular, we study two types of structural side
information about the underlying BN: (I) an upper bound on its clique number is
known, or (II) it is diamond-free. We provide theoretical guarantees for the
learning algorithms, including the worst-case number of tests required in each
scenario. As a consequence of our work, we show that bounded treewidth BNs can
be learned with polynomial complexity. Furthermore, we evaluate the performance
and the scalability of our algorithms in both synthetic and real-world
structures and show that they outperform the state-of-the-art structure
learning algorithms.
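The abstract gives no pseudocode, but the flavor of a constraint-based search whose conditioning-set sizes are capped by structural side information can be sketched as follows. This is a generic PC-style skeleton search with a hypothetical `ci_test` oracle, not the authors' recursive algorithm; the `max_cond_size` cap merely illustrates how a known bound (e.g., on the clique number) limits the number of conditional-independence tests.

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test, max_cond_size):
    """Learn an undirected skeleton with a PC-style search.

    ci_test(x, y, z) returns True when x is independent of y given
    the set z.  max_cond_size caps the conditioning-set size,
    mirroring how structural side information can bound the tests
    required (illustrative only).
    """
    # Start from the complete graph and delete edges as
    # separating sets are found.
    adj = {v: set(nodes) - {v} for v in nodes}
    for size in range(max_cond_size + 1):
        for x in nodes:
            for y in sorted(adj[x]):
                others = adj[x] - {y}
                if len(others) < size:
                    continue
                for z in combinations(sorted(others), size):
                    if ci_test(x, y, frozenset(z)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return {frozenset((x, y)) for x in nodes for y in adj[x]}

# Toy oracle for the chain A -> B -> C: A and C are independent
# given B, and no other pair is separable.
def chain_oracle(x, y, z):
    return frozenset((x, y)) == frozenset(("A", "C")) and "B" in z

edges = pc_skeleton(["A", "B", "C"], chain_oracle, max_cond_size=1)
print(sorted(sorted(e) for e in edges))  # [['A', 'B'], ['B', 'C']]
```

With the cap set to the side-information bound, the loop over conditioning sets never exceeds that size, which is where the worst-case test counts analyzed in the paper come from.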
Related papers
- Learning Representations for Reasoning: Generalizing Across Diverse Structures [5.031093893882575]
We aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures.
Our library treats structured data as first-class citizens and removes the barrier for developing algorithms on structured data.
arXiv Detail & Related papers (2024-10-16T20:23:37Z)
- On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games [55.2480439325792]
In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other.
Real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables.
We formalize a novel reinforcement learning model which explicitly represents the information structure.
arXiv Detail & Related papers (2024-03-01T21:28:19Z)
- Causal discovery using dynamically requested knowledge [7.904709685523615]
Causal Bayesian Networks (CBNs) are an important tool for reasoning under uncertainty in complex real-world systems.
We investigate a novel approach where the structure learning algorithm itself dynamically identifies and requests knowledge for relationships that the algorithm identifies as uncertain.
We show that it offers considerable gains in structural accuracy, which are generally larger than those offered by existing approaches for integrating knowledge.
arXiv Detail & Related papers (2023-10-17T11:21:23Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose complexity-impacted reasoning score (CIRS) to measure correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
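The summary above says the structural information is encoded via the abstract syntax tree. A toy illustration of that idea (not the paper's actual CIRS formula, just a depth-weighted count of control-flow and boolean nodes, with the node set and weighting chosen here for illustration) looks like this:

```python
import ast

def logical_complexity(source):
    """Toy structural-complexity score: count control-flow and
    boolean/comparison nodes in the AST, weighting each by its
    nesting depth (illustrative only)."""
    tree = ast.parse(source)
    logical = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.Compare)
    score = 0

    def walk(node, depth):
        nonlocal score
        for child in ast.iter_child_nodes(node):
            d = depth + isinstance(child, logical)
            if isinstance(child, logical):
                score += d
            walk(child, d)

    walk(tree, 0)
    return score

flat = "x = 1\ny = 2\n"
nested = "for i in range(3):\n    if i > 1:\n        print(i)\n"
print(logical_complexity(flat), logical_complexity(nested))  # 0 6
```

Straight-line code scores zero, while nested branching scores higher, which is the kind of signal an AST-based complexity measure is after.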
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level [0.0]
Successful machine learning methods require a trade-off between memorization and generalization.
We present a novel probabilistic graphical model structure learning approach that can learn, generalize and explain in elusive domains.
arXiv Detail & Related papers (2023-03-08T02:31:49Z)
- A survey of Bayesian Network structure learning [8.411014222942168]
This paper provides a review of 61 algorithms proposed for learning BN structure from data.
The basic approach of each algorithm is described in consistent terms, and the similarities and differences between them are highlighted.
Approaches for dealing with data noise in real-world datasets and incorporating expert knowledge into the learning process are also covered.
arXiv Detail & Related papers (2021-09-23T14:54:00Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Any Part of Bayesian Network Structure Learning [17.46459748913491]
We study an interesting and challenging problem, learning any part of a Bayesian network (BN) structure.
We first present a new concept of Expand-Backtracking to explain why local BN structure learning methods have the false edge orientation problem.
We then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm.
arXiv Detail & Related papers (2021-03-23T10:03:31Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
- A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
arXiv Detail & Related papers (2020-02-25T06:09:30Z)
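For context on the quantity named in the summary above, the standard definition of predictive $\mathcal{V}$-information, restated here from the cited paper's framework (notation may differ slightly from the original), is:

```latex
% Conditional V-entropy: the best achievable log loss over a
% restricted (computationally bounded) predictor family V.
H_{\mathcal{V}}(Y \mid X)
  = \inf_{f \in \mathcal{V}}
    \mathbb{E}_{x, y \sim X, Y}\bigl[-\log f[x](y)\bigr]

% V-information: the drop in V-entropy from observing X.
I_{\mathcal{V}}(X \to Y)
  = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X)
```

Restricting $\mathcal{V}$ is what makes the quantity estimable from data: unlike mutual information, the infimum ranges over a family small enough to learn.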
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.