Any Part of Bayesian Network Structure Learning
- URL: http://arxiv.org/abs/2103.13810v1
- Date: Tue, 23 Mar 2021 10:03:31 GMT
- Title: Any Part of Bayesian Network Structure Learning
- Authors: Zhaolong Ling, Kui Yu, Hao Wang, Lin Liu, and Jiuyong Li
- Abstract summary: We study an interesting and challenging problem, learning any part of a Bayesian network (BN) structure.
We first present a new concept of Expand-Backtracking to explain why local BN structure learning methods have the false edge orientation problem.
We then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm.
- Score: 17.46459748913491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study an interesting and challenging problem: learning any part of a Bayesian network (BN) structure. Using existing global BN structure learning algorithms to learn an entire BN structure merely to obtain the part of interest is computationally inefficient, while local BN structure learning algorithms suffer from the false edge orientation problem when applied directly to this task. In this paper, we first present a new concept, Expand-Backtracking, to explain why local BN structure learning methods have the false edge orientation problem, and then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm. Specifically, APSL divides the V-structures in a Markov blanket (MB) into two types, collider V-structures and non-collider V-structures; it then starts from a node of interest and recursively finds both types of V-structures in the discovered MBs, until the part of the BN structure of interest is oriented. To improve the efficiency of APSL, we further design APSL-FS, a variant that uses feature selection. Extensive experiments on six benchmark BNs validate the efficiency and accuracy of our methods.
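To make the recursive expansion concrete, below is a minimal Python sketch of the idea (not the authors' implementation); the Markov-blanket routine find_mb is a stand-in for any MB discovery algorithm and is stubbed here with a toy oracle.

```python
# Illustrative sketch of APSL's recursive MB expansion (not the authors' code).
from collections import deque

TOY_MB = {  # toy Markov blankets for demonstration only
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def find_mb(node):
    """Stub for an MB discovery algorithm (assumption, not part of APSL)."""
    return TOY_MB[node]

def expand_from(target, max_depth=2):
    """Breadth-first expansion from the node of interest.

    APSL would orient collider and non-collider V-structures inside each
    found MB and backtrack on conflicts; this sketch only collects the
    skeleton edges such a search would visit.
    """
    visited, edges = set(), set()
    queue = deque([(target, 0)])
    while queue:
        node, depth = queue.popleft()
        if node in visited or depth > max_depth:
            continue
        visited.add(node)
        for nb in find_mb(node):
            edges.add(frozenset((node, nb)))
            queue.append((nb, depth + 1))
    return edges

print(sorted(tuple(sorted(e)) for e in expand_from("A", max_depth=1)))
```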
Related papers
- Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices [88.33936714942996]
We present a unifying framework that enables searching among all linear operators expressible via an Einstein summation.
We show that differences in the compute-optimal scaling laws are mostly governed by a small number of variables.
We find that a Mixture-of-Experts (MoE) variant of these structured layers learns an MoE in every single linear layer of the model, including the projections in the attention blocks.
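As a concrete illustration of a linear operator expressible via an Einstein summation, here is a hedged numpy sketch of one simple point in such a search space, a Kronecker-structured layer; the factor shapes are invented for the example.

```python
# A Kronecker-structured linear layer expressed as one einsum (illustrative).
# The full d x d matrix, d = d1 * d2, is never materialized.
import numpy as np

d1, d2 = 8, 16
x = np.random.default_rng(0).standard_normal((d1, d2))  # input, reshaped
A = np.random.default_rng(1).standard_normal((d1, d1))  # left factor
B = np.random.default_rng(2).standard_normal((d2, d2))  # right factor

# y[i, k] = sum_{j, l} A[i, j] * B[k, l] * x[j, l]  (equivalently A @ x @ B.T)
y = np.einsum("ij,kl,jl->ik", A, B, x)
print(y.shape)  # (8, 16): d outputs from O(d1^2 + d2^2) parameters
```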
arXiv Detail & Related papers (2024-10-03T00:44:50Z)
- Scalability of Bayesian Network Structure Elicitation with Large Language Models: a Novel Methodology and Comparative Analysis [5.91003502313675]
We propose a novel method for Bayesian Networks (BNs) structure elicitation based on several LLMs with different experiences.
We compare the method with one alternative method on various widely and not widely known BNs of different sizes and study the scalability of both methods on them.
arXiv Detail & Related papers (2024-07-12T14:52:13Z)
- Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network Structure Learning [13.231953456197946]
Dynamic Bayesian Networks (DBNs) are renowned for their interpretability.
Structure learning of DBNs from data is challenging, particularly for datasets with thousands of variables.
This paper introduces a novel divide-and-conquer strategy, originally developed for static BNs, and adapts it for large-scale DBN structure learning.
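A minimal sketch of the divide-and-conquer pattern described above (illustrative only; learn_block is a hypothetical stand-in for any off-the-shelf structure learner applied to one partition):

```python
# Divide-and-conquer skeleton (not the paper's algorithm): partition the
# variables, learn each block independently, union the learned edges.
import numpy as np

def learn_block(data, cols):
    """Hypothetical stand-in for a structure learner on one block.

    Here: connect column pairs whose absolute correlation exceeds 0.5.
    """
    edges = set()
    for i in cols:
        for j in cols:
            if i < j and abs(np.corrcoef(data[:, i], data[:, j])[0, 1]) > 0.5:
                edges.add((i, j))
    return edges

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 6))
data[:, 1] += data[:, 0]                      # induce one dependency
blocks = [[0, 1, 2], [3, 4, 5]]               # conquer each block separately
skeleton = set().union(*(learn_block(data, b) for b in blocks))
print(skeleton)                               # {(0, 1)} expected
```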
arXiv Detail & Related papers (2023-12-04T09:03:06Z)
- SE-GSL: A General and Effective Graph Structure Learning Framework through Structural Entropy Optimization [67.28453445927825]
Graph Neural Networks (GNNs) are de facto solutions to structural data learning.
Existing graph structure learning (GSL) frameworks still lack robustness and interpretability.
This paper proposes a general GSL framework, SE-GSL, through structural entropy and the graph hierarchy abstracted in the encoding tree.
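For orientation, the one-dimensional structural entropy underlying this line of work has a simple closed form; a small sketch on a toy graph follows (higher-dimensional variants use the encoding tree, which is beyond this sketch).

```python
# One-dimensional structural entropy of a graph:
#   H1(G) = -sum_v (d_v / 2m) * log2(d_v / 2m)
import math
import networkx as nx

def structural_entropy_1d(g):
    two_m = 2 * g.number_of_edges()
    return -sum(
        (d / two_m) * math.log2(d / two_m) for _, d in g.degree() if d > 0
    )

print(structural_entropy_1d(nx.path_graph(4)))  # toy example, ~1.92
```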
arXiv Detail & Related papers (2023-03-17T05:20:24Z)
- Nested Named Entity Recognition as Holistic Structure Parsing [92.8397338250383]
This work models the full nested NEs in a sentence as a holistic structure and proposes a holistic structure parsing algorithm to disclose all the NEs at once.
Experiments show that our model yields promising results on widely used benchmarks, approaching or even achieving the state of the art.
arXiv Detail & Related papers (2022-04-17T12:48:20Z)
- Learning Bayesian Networks in the Presence of Structural Side Information [22.734574764075226]
We study the problem of learning a Bayesian network (BN) of a set of variables when structural side information about the system is available.
We develop an algorithm that efficiently incorporates such knowledge into the learning process.
As a consequence of our work, we show that bounded-treewidth BNs can be learned with polynomial complexity.
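One concrete form of such side information is a treewidth bound; below is a hedged sketch of checking a candidate DAG's moralized graph against the bound, using networkx's min-degree heuristic, which upper-bounds the true treewidth.

```python
# Checking a candidate DAG against a treewidth bound (illustrative).
from itertools import combinations
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def moral_graph(dag):
    """Drop directions and marry co-parents (standard moralization)."""
    g = dag.to_undirected()
    for node in dag:
        g.add_edges_from(combinations(dag.predecessors(node), 2))
    return g

dag = nx.DiGraph([("A", "C"), ("B", "C"), ("C", "D")])
width, _tree = treewidth_min_degree(moral_graph(dag))
print(width, width <= 2)  # accept the candidate if within the bound
```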
arXiv Detail & Related papers (2021-12-20T22:14:19Z)
- Feature Selection for Efficient Local-to-Global Bayesian Network Structure Learning [18.736822756439437]
We propose an efficient F2SL (feature selection-based structure learning) approach to local-to-global BN structure learning.
F2SL first employs the MRMR method to learn a DAG skeleton and then orients the edges in the skeleton.
The experiments validate that, compared with state-of-the-art local-to-global BN learning algorithms, the proposed algorithms are more efficient and provide competitive structure learning quality.
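A hedged sketch of the skeleton-construction step: an MRMR-style selector run per variable, with correlation standing in for mutual information to keep the toy self-contained (not the F2SL implementation).

```python
# MRMR-style neighbour selection per variable; the union over all variables
# gives an undirected skeleton to be oriented in a second phase.
import numpy as np

def mrmr_select(data, target, k=2):
    """Greedy max-relevance/min-redundancy: |corr with target| minus mean
    |corr with already-selected features| (correlation as a toy stand-in
    for mutual information)."""
    corr = np.abs(np.corrcoef(data, rowvar=False))
    candidates = [i for i in range(data.shape[1]) if i != target]
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            red = np.mean([corr[i, j] for j in selected]) if selected else 0.0
            return corr[i, target] - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 4))
data[:, 3] += data[:, 0] + data[:, 1]          # node 3 depends on 0 and 1
skeleton = {frozenset((t, nb)) for t in range(4) for nb in mrmr_select(data, t)}
print(sorted(tuple(sorted(e)) for e in skeleton))
```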
arXiv Detail & Related papers (2021-12-20T07:44:38Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm that adopts an Evolution Strategies (ES)-based method to approximate the response Jacobian matrix in the hypergradient of BO.
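The flavor of the ES approximation fits in a few lines; this toy sketch estimates a gradient by antithetic Gaussian perturbations rather than explicit derivatives (illustrative, not the paper's estimator).

```python
# Antithetic evolution-strategies (ES) estimate of grad f at x, f scalar:
#   g ~= (1/n) sum_i [(f(x + s*eps_i) - f(x - s*eps_i)) / (2s)] * eps_i
import numpy as np

def es_grad(f, x, sigma=1e-2, n=256, rng=np.random.default_rng(0)):
    g = np.zeros_like(x)
    for _ in range(n):
        eps = rng.standard_normal(x.shape)
        g += (f(x + sigma * eps) - f(x - sigma * eps)) / (2 * sigma) * eps
    return g / n

print(es_grad(lambda v: (v ** 2).sum(), np.array([1.0, -2.0])))  # ~ [2, -4]
```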
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- Unveiling the Potential of Structure-Preserving for Weakly Supervised Object Localization [71.79436685992128]
We propose a two-stage approach, termed structure-preserving activation (SPA), towards fully leveraging the structure information incorporated in convolutional features for WSOL.
In the first stage, a restricted activation module (RAM) is designed to alleviate the structure-missing issue caused by the classification network.
In the second stage, we propose a post-processing approach, termed the self-correlation map generating (SCG) module, to obtain structure-preserving localization maps.
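A hedged sketch of what a self-correlation map over convolutional features looks like: pairwise cosine similarity between spatial positions (illustrative, not the SPA code).

```python
# Self-correlation of a CxHxW feature map: cosine similarity between every
# pair of spatial positions, giving an (H*W) x (H*W) structure map.
import numpy as np

def self_correlation(feat):
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)                       # C x (H*W)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    return flat.T @ flat                                # (H*W) x (H*W)

feat = np.random.default_rng(0).standard_normal((8, 4, 4))
print(self_correlation(feat).shape)  # (16, 16)
```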
arXiv Detail & Related papers (2021-03-08T03:04:14Z)
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
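A hedged reading of the composite-kernel idea: build a convolution kernel as a linear combination of fixed binary basis masks so that only a few coefficients are learned (illustrative, not the paper's decomposition).

```python
# Compose a 3x3 conv kernel from fixed masks and learned coefficients.
import numpy as np

basis = np.stack([
    np.ones((3, 3)),                             # all-ones mask
    np.pad(np.ones((2, 2)), ((0, 1), (0, 1))),   # top-left 2x2 block
    np.eye(3),                                   # diagonal mask
])
alpha = np.array([0.5, -1.0, 2.0])               # learned coefficients
kernel = np.tensordot(alpha, basis, axes=1)      # weighted sum of masks
print(kernel)
```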
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
- Turbocharging Treewidth-Bounded Bayesian Network Structure Learning [26.575053800551633]
We present a new approach for learning the structure of a treewidth-bounded Bayesian Network (BN).
The key to our approach is applying an exact method (based on MaxSAT) locally to improve the score of a heuristically computed BN.
Our experiments show that our method improves the score of BNs provided by state-of-the-art methods, often significantly.
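The local-improvement pattern is easy to sketch on a toy problem; here the exact subsolver is brute force over a small window, standing in for the paper's MaxSAT component (illustrative only).

```python
# Exact local improvement on a toy objective: repeatedly re-solve a small
# window of a bit vector exhaustively, keeping changes that raise the score.
from itertools import product
import random

def score(bits):                       # toy objective: count of 1s
    return sum(bits)

def solve_window_exactly(bits, lo, hi):
    """Try every assignment of bits[lo:hi]; return the best full vector."""
    best = list(bits)
    for assignment in product([0, 1], repeat=hi - lo):
        cand = list(bits)
        cand[lo:hi] = assignment
        if score(cand) > score(best):
            best = cand
    return best

random.seed(0)
bits = [random.randint(0, 1) for _ in range(12)]
for _ in range(6):                     # local improvement rounds
    lo = random.randrange(0, 9)
    bits = solve_window_exactly(bits, lo, lo + 3)
print(bits, score(bits))
```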
arXiv Detail & Related papers (2020-06-24T16:13:10Z)