Turbocharging Treewidth-Bounded Bayesian Network Structure Learning
- URL: http://arxiv.org/abs/2006.13843v2
- Date: Fri, 5 Feb 2021 16:02:25 GMT
- Title: Turbocharging Treewidth-Bounded Bayesian Network Structure Learning
- Authors: Vaidyanathan P. R. and Stefan Szeider
- Abstract summary: We present a new approach for learning the structure of a treewidth-bounded Network (BN)
The key to our approach is applying an exact method (based on MaxSAT) locally to improve the score of aally computed BN.
Our experiments show that our method improves the score of BNs provided by state-of-the-art methods, often significantly.
- Score: 26.575053800551633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new approach for learning the structure of a treewidth-bounded
Bayesian Network (BN). The key to our approach is applying an exact method
(based on MaxSAT) locally, to improve the score of a heuristically computed BN.
This approach allows us to scale the power of exact methods -- so far only
applicable to BNs with several dozens of random variables -- to large BNs with
several thousands of random variables. Our experiments show that our method
improves the score of BNs provided by state-of-the-art heuristic methods, often
significantly.
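A minimal sketch of the local-improvement loop described in the abstract is given below. Every helper passed in (heuristic_bn, select_local_subinstance, solve_subinstance_exactly, bn_score) is a hypothetical placeholder standing in for the paper's components; the exact sub-solver would correspond to the MaxSAT encoding, but this is not the authors' actual interface.

```python
# Sketch of the local-improvement idea: all helpers are hypothetical
# placeholders passed in by the caller, not the authors' actual API.

def improve_bn_locally(data, treewidth_bound, budget, rounds,
                       heuristic_bn, select_local_subinstance,
                       solve_subinstance_exactly, bn_score):
    """Iteratively replace small parts of a heuristically learned BN with
    exactly optimized sub-networks, keeping the treewidth bound intact."""
    bn = heuristic_bn(data, treewidth_bound)            # global heuristic solution
    best = bn_score(bn, data)
    for _ in range(rounds):
        # Choose a small set of variables (at most `budget`) whose local
        # structure the exact method is allowed to re-optimize.
        sub = select_local_subinstance(bn, budget)
        # Exact optimization of the sub-instance (the MaxSAT encoding in the
        # paper), constrained so the whole network still respects the bound.
        candidate = solve_subinstance_exactly(bn, sub, data, treewidth_bound)
        score = bn_score(candidate, data)
        if score > best:                                # keep only improvements
            bn, best = candidate, score
    return bn
```

Because only small sub-instances are handed to the exact solver, each call stays within the size range where MaxSAT-based methods are feasible, which is how the approach scales to networks with thousands of variables.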
Related papers
- Scalability of Bayesian Network Structure Elicitation with Large Language Models: a Novel Methodology and Comparative Analysis [5.91003502313675]
We propose a novel method for Bayesian Networks (BNs) structure elicitation based on several LLMs with different experiences.
We compare the method with an alternative method on various BNs of different sizes, both widely known and less well known, and study the scalability of both methods on them.
arXiv Detail & Related papers (2024-07-12T14:52:13Z) - Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network
Structure Learning [13.231953456197946]
Dynamic Bayesian Networks (DBNs) are renowned for their interpretability.
Structure learning of DBNs from data is challenging, particularly for datasets with thousands of variables.
This paper introduces a novel divide-and-conquer strategy, originally developed for static BNs, and adapts it for large-scale DBN structure learning.
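A rough skeleton of such a divide-and-conquer pipeline is sketched below; the partitioning, per-cluster learner, and merge step are hypothetical placeholder arguments, and the paper's concrete procedure (and its DBN-specific adaptations) may differ.

```python
# Divide-and-conquer skeleton; `data` is assumed to be a pandas-style DataFrame
# and all three callables are hypothetical placeholders for the sketch.

def divide_and_conquer_structure(data, partition, learn_subnetwork, merge):
    """Split the variables into (possibly overlapping) clusters, learn a
    structure on each cluster independently, then merge the sub-structures."""
    clusters = partition(list(data.columns))            # groups of variable names
    subgraphs = [learn_subnetwork(data[c]) for c in clusters]
    return merge(subgraphs)                             # resolve overlaps / conflicts
```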
arXiv Detail & Related papers (2023-12-04T09:03:06Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
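A short PyTorch-style sketch of the block-local training idea: each cascaded block has its own head that outputs a label distribution and is optimized on its own, with inputs detached so no gradients cross block boundaries. The layer sizes and two-block depth are arbitrary choices for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative CaFo-style setup (sizes and depth are arbitrary for the sketch):
# every cascaded block gets its own classifier head that outputs a label
# distribution and is optimized independently, with no backprop across blocks.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
                        nn.Sequential(nn.Linear(256, 128), nn.ReLU())])
heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(128, 10)])
opts = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-3)
        for b, h in zip(blocks, heads)]

def train_step(x, y):
    h = x
    for block, head, opt in zip(blocks, heads, opts):
        h = block(h.detach())                 # detach: earlier blocks get no gradient
        loss = F.cross_entropy(head(h), y)    # block-local prediction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Toy usage with random data:
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
train_step(x, y)
```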
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
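The core ES ingredient is a derivative estimate built from forward evaluations only, with no explicit Jacobian- or Hessian-vector products. The sketch below shows a standard antithetic ES gradient estimator on a toy function; how such estimates are plugged into the response Jacobian inside the hypergradient is specific to the paper and not reproduced here.

```python
import numpy as np

def es_gradient(f, x, sigma=0.05, n_samples=64, seed=0):
    """Antithetic Evolution Strategies estimate of grad f(x): only forward
    evaluations of f are needed, no explicit Jacobian/Hessian products."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)
        grad += (f(x + sigma * eps) - f(x - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_samples)

# Toy check: the estimate approaches the true gradient 2*x of f(x) = ||x||^2.
x = np.array([1.0, -2.0, 0.5])
print(es_gradient(lambda v: float(v @ v), x))
```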
arXiv Detail & Related papers (2021-10-13T19:36:50Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian
Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
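For orientation, a generic branch-and-bound skeleton is sketched below; the bounding function and the splitting rule are placeholder arguments, which is exactly where BaDNB's contributions (dual bounds from Lagrangian decomposition, activation-based branching) would plug in. Counterexample search, which a complete verifier also performs, is omitted from the sketch.

```python
# Generic BaB skeleton for proving a property "f(x) >= 0 for all x in the domain".
# `lower_bound` and `split` are placeholders for the method-specific parts.

def branch_and_bound(root_domain, lower_bound, split, max_domains=10_000):
    """Return True if the property is proved on root_domain, False if undecided."""
    queue = [root_domain]
    while queue:
        if len(queue) > max_domains:
            return False                      # give up: too many open subproblems
        dom = queue.pop()
        if lower_bound(dom) >= 0:             # sound lower bound certifies this part
            continue
        queue.extend(split(dom))              # otherwise branch into subdomains
    return True
```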
arXiv Detail & Related papers (2021-04-14T09:22:42Z) - All at Once Network Quantization via Collaborative Knowledge Transfer [56.95849086170461]
We develop a novel collaborative knowledge transfer approach for efficiently training the all-at-once quantization network.
Specifically, we propose an adaptive selection strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision student.
To effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.
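A toy sketch of the block-swapping idea: during a forward pass, each student block is randomly replaced by the corresponding teacher block. The block lists, swap probability, and linear layers are assumptions made for illustration only.

```python
import random
import torch
import torch.nn as nn

def forward_with_swapping(student_blocks, teacher_blocks, x, swap_prob=0.5):
    """Run the student, randomly substituting the higher-precision teacher
    block at each position with probability swap_prob."""
    h = x
    for s_block, t_block in zip(student_blocks, teacher_blocks):
        block = t_block if random.random() < swap_prob else s_block
        h = block(h)
    return h

# Toy usage: two matching stacks of linear blocks.
student = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
teacher = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
out = forward_with_swapping(student, teacher, torch.randn(4, 16))
```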
arXiv Detail & Related papers (2021-03-02T03:09:03Z) - Sandwich Batch Normalization [96.2529041037824]
We present Sandwich Batch Normalization (SaBN), an easy improvement of Batch Normalization (BN) with only a few lines of code changes.
Our SaBN factorizes the BN affine layer into one shared sandwich affine layer, cascaded by several parallel independent affine layers.
We demonstrate the prevailing effectiveness of SaBN as a drop-in replacement in four tasks.
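An illustrative PyTorch module in the spirit of this factorization: plain normalization, then one shared affine transform, then one of several independent affine branches selected by an index. The class name and the way the branch index is chosen are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class SandwichBN2d(nn.Module):
    """Illustrative SaBN-style layer: normalization, a shared per-channel
    affine, then one of several independent per-channel affines chosen by
    an index. How branches are assigned is left to the surrounding model."""

    def __init__(self, num_channels, num_branches):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_channels, affine=False)  # plain normalization
        self.shared_weight = nn.Parameter(torch.ones(num_channels))
        self.shared_bias = nn.Parameter(torch.zeros(num_channels))
        self.branch_weight = nn.Parameter(torch.ones(num_branches, num_channels))
        self.branch_bias = nn.Parameter(torch.zeros(num_branches, num_channels))

    def forward(self, x, branch):
        h = self.norm(x)
        h = h * self.shared_weight.view(1, -1, 1, 1) + self.shared_bias.view(1, -1, 1, 1)
        w = self.branch_weight[branch].view(1, -1, 1, 1)
        b = self.branch_bias[branch].view(1, -1, 1, 1)
        return h * w + b

sabn = SandwichBN2d(num_channels=8, num_branches=4)
y = sabn(torch.randn(2, 8, 5, 5), branch=1)
```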
arXiv Detail & Related papers (2021-02-22T22:09:43Z) - On Resource-Efficient Bayesian Network Classifiers and Deep Neural
Networks [14.540226579203207]
We present two methods to reduce the complexity of Bayesian network (BN) classifiers.
First, we introduce quantization-aware training using the straight-through gradient estimator to quantize the parameters of BNs to few bits.
Second, we extend a recently proposed differentiable tree-augmented naive Bayes (TAN) structure learning approach by also considering the model size.
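The straight-through trick itself can be shown in a few lines: quantize in the forward pass but let gradients flow as if the quantizer were the identity. The symmetric uniform quantizer below is an illustrative choice and not the paper's BN-classifier-specific parameterization.

```python
import torch

def quantize_ste(w, num_bits=4):
    """Uniformly quantize w to num_bits while passing gradients straight through."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale   # hard quantization
    # Straight-through estimator: forward uses w_q, backward treats it as identity.
    return w + (w_q - w).detach()

w = torch.randn(5, requires_grad=True)
loss = quantize_ste(w).sum()
loss.backward()
print(w.grad)   # all ones: the gradient passed straight through the quantizer
```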
arXiv Detail & Related papers (2020-10-22T14:47:55Z) - Differentiable TAN Structure Learning for Bayesian Network Classifiers [19.30562170076368]
We consider learning of tree-augmented naive Bayes (TAN) structures for Bayesian network classifiers with discrete input features.
Instead of performing a combinatorial optimization over the space of possible graph structures, the proposed method learns a distribution over graph structures.
Our method consistently outperforms random TAN structures and Chow-Liu TAN structures.
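One way to picture "a distribution over graph structures" for TAN: give every feature a learnable categorical distribution over which other feature acts as its parent. The sketch below shows only this parameterization; a real TAN additionally requires the parent graph to form a tree, and the training objective is not reproduced.

```python
import torch
import torch.nn.functional as F

n_features = 6
# One row of logits per feature, over its candidate parent features.
logits = torch.zeros(n_features, n_features, requires_grad=True)

def parent_distribution(logits):
    """Softmax over candidate parents, with self-loops masked out."""
    mask = torch.eye(logits.size(0), dtype=torch.bool)
    return F.softmax(logits.masked_fill(mask, float('-inf')), dim=1)

probs = parent_distribution(logits)   # row i: distribution over parents of feature i
# Sampling one concrete structure (note: tree constraint not enforced here).
sampled_parents = torch.multinomial(probs.detach(), 1).squeeze(1)
```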
arXiv Detail & Related papers (2020-08-21T16:22:47Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
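A stripped-down version of iterative mask discovery by magnitude: repeatedly raise the sparsity level and drop the smallest surviving weights. Real ESPN interleaves this with training and other ingredients that are omitted here; the layer shapes are arbitrary.

```python
import torch

def iterative_magnitude_masks(weights, target_sparsity=0.99, steps=5):
    """Illustrative iterative mask discovery: gradually prune the smallest-
    magnitude surviving weights until the target sparsity is reached."""
    masks = {name: torch.ones_like(w) for name, w in weights.items()}
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps          # anneal toward the target
        for name, w in weights.items():
            surviving = (w * masks[name]).abs()
            k = int(sparsity * surviving.numel())
            if k > 0:
                threshold = surviving.flatten().kthvalue(k).values
                masks[name] = (surviving > threshold).float()  # pruned weights stay pruned
    return masks

weights = {"fc1": torch.randn(128, 64), "fc2": torch.randn(10, 128)}
masks = iterative_magnitude_masks(weights)
print(sum(m.sum().item() for m in masks.values()))  # number of surviving weights
```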
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - A Framework for Neural Network Pruning Using Gibbs Distributions [34.0576955010317]
Gibbs pruning is a novel framework for expressing and designing neural network pruning methods.
It can train and prune a network simultaneously in such a way that the learned weights and pruning mask are well-adapted for each other.
We achieve a new state-of-the-art result for pruning ResNet-56 with the CIFAR-10 dataset.
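As a loose illustration of pruning a network while training it, the sketch below samples a stochastic mask whose keep-probabilities sharpen around large-magnitude weights as a temperature is annealed. This is only a stand-in for the idea of temperature-controlled stochastic masks, not the paper's actual Gibbs distribution or energy function.

```python
import torch

def stochastic_mask(w, keep_ratio=0.1, temperature=1.0):
    """Sample a pruning mask whose keep-probabilities concentrate on
    large-magnitude weights as the temperature is annealed toward zero.
    Loose illustration only, not the paper's Gibbs formulation."""
    scores = w.detach().abs().flatten()
    # Threshold at the keep_ratio quantile; the sigmoid softens it by temperature.
    k = max(1, int((1 - keep_ratio) * scores.numel()))
    threshold = scores.kthvalue(k).values
    probs = torch.sigmoid((w.abs() - threshold) / max(temperature, 1e-6))
    return torch.bernoulli(probs)

w = torch.randn(64, 64)
for t in [1.0, 0.1, 0.01]:            # annealing the temperature sharpens the mask
    mask = stochastic_mask(w, keep_ratio=0.1, temperature=t)
    print(t, mask.mean().item())      # fraction of kept weights approaches 0.1
```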
arXiv Detail & Related papers (2020-06-08T23:04:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.