Approximate learning of high dimensional Bayesian network structures via
pruning of Candidate Parent Sets
- URL: http://arxiv.org/abs/2006.04753v2
- Date: Thu, 10 Sep 2020 15:48:29 GMT
- Title: Approximate learning of high dimensional Bayesian network structures via
pruning of Candidate Parent Sets
- Authors: Zhigao Guo and Anthony C. Constantinou
- Abstract summary: Approximate solutions exist because exact learning is not applicable to networks of moderate or higher complexity.
Some approximate algorithms are optimised to handle thousands of variables, but may still be unable to learn such high dimensional structures.
This paper explores a strategy towards pruning the size of candidate parent sets, aimed at high dimensionality problems.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Score-based algorithms that learn Bayesian Network (BN) structures provide
solutions ranging from different levels of approximate learning to exact
learning. Approximate solutions exist because exact learning is generally not
applicable to networks of moderate or higher complexity. In general,
approximate solutions tend to sacrifice accuracy for speed, where the aim is to
minimise the loss in accuracy and maximise the gain in speed. While some
approximate algorithms are optimised to handle thousands of variables, these
algorithms may still be unable to learn such high dimensional structures. Some
of the most efficient score-based algorithms cast the structure learning
problem as a combinatorial optimisation of candidate parent sets. This paper
explores a strategy towards pruning the size of candidate parent sets, aimed at
high dimensionality problems. The results illustrate how different levels of
pruning affect the learning speed relative to the loss in accuracy in terms of
model fitting, and show that aggressive pruning may be required to produce
approximate solutions for high complexity problems.
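To make the candidate-parent-set idea concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of the kind of pruning the abstract describes: each variable keeps a cache of candidate parent sets with local log-scores (e.g. BIC), and that cache is cut down with an in-degree limit and a top-k threshold before structure search. The function name and the pruning knobs `top_k` and `max_parents` are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): pruning one variable's
# candidate parent set cache before handing it to a score-based structure
# learner. Scores are assumed to be log-scores (higher is better), e.g. BIC.

from typing import Dict, FrozenSet


def prune_candidate_parent_sets(
    cache: Dict[FrozenSet[str], float],
    top_k: int = 100,
    max_parents: int = 3,
) -> Dict[FrozenSet[str], float]:
    """Keep only the highest-scoring candidate parent sets for one variable.

    `top_k` and `max_parents` are illustrative pruning knobs, not the
    specific pruning levels evaluated in the paper.
    """
    # Discard parent sets that exceed the in-degree limit.
    small_enough = {ps: s for ps, s in cache.items() if len(ps) <= max_parents}
    # Keep only the top_k best-scoring of the remaining sets.
    ranked = sorted(small_enough.items(), key=lambda item: item[1], reverse=True)
    return dict(ranked[:top_k])


# Toy usage: candidate parent sets and log-scores for a single variable "X".
cache_for_x = {
    frozenset(): -120.5,
    frozenset({"A"}): -98.2,
    frozenset({"B"}): -101.7,
    frozenset({"A", "B"}): -95.4,
    frozenset({"A", "B", "C", "D"}): -94.9,  # pruned: too many parents
}
print(prune_candidate_parent_sets(cache_for_x, top_k=3))
```

Tighter settings of these knobs correspond to more aggressive pruning: the search space shrinks and learning speeds up, at the risk of discarding parent sets that a better-fitting network would need, which is the speed/accuracy trade-off the paper measures.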
Related papers
- A machine learning based algorithm selection method to solve the minimum
cost flow problem [0.8399688944263843]
We train several machine learning classifiers to predict the fastest among a given set of solvers.
It is shown that tree-based models appear to adapt to and exploit the relevant structures of the minimum-cost flow problem.
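As a rough illustration of the algorithm-selection idea (a sketch under assumed features, solver names, and synthetic labels, not the authors' pipeline), instance features of a minimum-cost flow problem are mapped to the solver that was fastest on that instance, and a tree-based classifier is fitted to that mapping:

```python
# Hypothetical sketch of ML-based algorithm selection: predict the fastest
# min-cost flow solver from simple instance features. Feature choices, solver
# names, and the random labels are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

SOLVERS = ["network_simplex", "cost_scaling", "ssp"]  # assumed candidate set

rng = np.random.default_rng(1)
# Features per instance: e.g. [num_nodes, num_arcs, density, max_capacity], scaled.
X = rng.uniform(size=(200, 4))
# Label: index of the solver with the smallest measured runtime (synthetic here).
y = rng.integers(0, len(SOLVERS), size=200)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
new_instance = np.array([[0.3, 0.8, 0.5, 0.2]])
print("predicted fastest solver:", SOLVERS[clf.predict(new_instance)[0]])
```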
arXiv Detail & Related papers (2022-10-03T16:06:24Z) - Learning with Differentiable Algorithms [6.47243430672461]
This thesis explores combining classic algorithms and machine learning systems like neural networks.
The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm.
In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable sorting gates, and differentiable logic gate networks.
arXiv Detail & Related papers (2022-09-01T17:30:00Z) - Training Neural Networks using SAT solvers [1.0152838128195465]
We propose an algorithm that explores global optimisation with SAT solvers for training a neural network.
In the experiments, we demonstrate the effectiveness of our algorithm against the ADAM optimiser in certain tasks like parity learning.
arXiv Detail & Related papers (2022-06-10T01:31:12Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
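For intuition, here is a minimal sketch (under an assumed linear model and pairwise hinge loss, not the paper's specific algorithms) of online gradient descent for pairwise learning: the loss is defined on pairs of instances, and the model takes one gradient step per streamed pair.

```python
# Minimal sketch (assumptions, not the paper's algorithm): online gradient
# descent for pairwise learning, where the loss depends on a pair of
# instances. A linear scorer is trained with a pairwise hinge loss on
# streamed pairs.

import numpy as np


def ogd_pairwise(pairs, labels, dim, lr=0.1):
    """pairs: iterable of (x_i, x_j) arrays; labels: +1 if x_i should score
    higher than x_j, else -1. Returns the learned weight vector."""
    w = np.zeros(dim)
    for (x_i, x_j), y in zip(pairs, labels):
        margin = y * w.dot(x_i - x_j)
        if margin < 1.0:  # pairwise hinge loss is active
            grad = -y * (x_i - x_j)
            w -= lr * grad  # single gradient step per streamed pair
    return w


rng = np.random.default_rng(0)
xs = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(50)]
ys = [1 if a.sum() > b.sum() else -1 for a, b in xs]
print(ogd_pairwise(xs, ys, dim=4))
```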
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - Inexact Derivative-Free Optimization for Bilevel Learning [0.27074235008521236]
Variational regularization techniques are dominant in the field of mathematical imaging.
A now-common strategy is to learn the regularisation parameters of these techniques from data.
It is common when solving the upper-level problem to assume access to exact solutions of the lower-level problem, which is practically infeasible.
We propose to solve these problems using inexact derivative-free optimization algorithms which never require exact lower-level problem solutions.
arXiv Detail & Related papers (2020-06-23T00:17:32Z) - Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a "fixed depth" for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model along with the stopping policy improves the performances on a diverse set of tasks.
arXiv Detail & Related papers (2020-06-09T07:22:01Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)