Improved Acyclicity Reasoning for Bayesian Network Structure Learning
with Constraint Programming
- URL: http://arxiv.org/abs/2106.12269v1
- Date: Wed, 23 Jun 2021 09:46:11 GMT
- Title: Improved Acyclicity Reasoning for Bayesian Network Structure Learning
with Constraint Programming
- Authors: Fulya Tr\"osser (MIAT INRA), Simon de Givry (MIAT INRA), George
Katsirelos (MIA-Paris)
- Abstract summary: Learning the structure of a Bayesian network (BNSL) from discrete data is known to be an NP-hard task.
In this work, we propose a new polynomial-time algorithm for discovering a subset of all possible cluster cuts.
We show that, despite being suboptimal, they improve performance by orders of magnitude.
The resulting solver also compares favourably with GOBNILP, a state-of-the-art solver for the BNSL problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian networks are probabilistic graphical models with a wide range of
application areas including gene regulatory networks inference, risk analysis
and image processing. Learning the structure of a Bayesian network (BNSL) from
discrete data is known to be an NP-hard task with a superexponential search
space of directed acyclic graphs. In this work, we propose a new polynomial
time algorithm for discovering a subset of all possible cluster cuts, a greedy
algorithm for approximately solving the resulting linear program, and a
generalised arc consistency algorithm for the acyclicity constraint. We embed
these in the constraint programming-based branch-and-bound solver CPBayes and
show that, despite being suboptimal, they improve performance by orders of
magnitude. The resulting solver also compares favourably with GOBNILP, a
state-of-the-art solver for the BNSL problem which solves an NP-hard problem to
discover each cut and solves the linear program exactly.
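To make the cluster-cut notion concrete: a set of chosen parent sets is acyclic exactly when every cluster of variables contains at least one variable whose parents all lie outside the cluster. The sketch below (illustrative Python, not the authors' CPBayes code; all names are invented) finds a violated cluster by peeling off variables that could be placed first in a topological order; any nonempty remainder certifies a cycle and yields a cluster cut.
```python
# Minimal sketch of the cluster inequality behind cluster cuts in BNSL:
# for every cluster C with |C| >= 2, at least one variable in C must pick
# a parent set disjoint from C, or the chosen parent sets contain a cycle.
# `parent_set` maps each variable to its currently chosen parents.

def violates_cluster(parent_set, cluster):
    """True if every variable in `cluster` keeps a parent inside it."""
    return all(parent_set[v] & cluster for v in cluster)

def find_violated_cluster(parent_set):
    """Peel off variables whose chosen parents all lie outside the current
    set; a nonempty remainder is a violated cluster (i.e. a cycle)."""
    cluster = set(parent_set)
    changed = True
    while changed:
        changed = False
        for v in list(cluster):
            if not parent_set[v] & cluster:   # v could come first in an order
                cluster.discard(v)
                changed = True
    return cluster or None

choice = {"a": {"c"}, "b": {"a"}, "c": {"b"}, "d": set()}  # cycle a->b->c->a
print(find_violated_cluster(choice))              # {'a', 'b', 'c'}
print(violates_cluster(choice, {"a", "b", "c"}))  # True: yields a cluster cut
```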
Related papers
- Are Graph Neural Networks Optimal Approximation Algorithms? [26.5364112420121]
We design graph neural network architectures that capture optimal approximation algorithms for a class of optimization problems.
We take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing bounds on the optimal solution from the learned embeddings of OptGNN.
arXiv Detail & Related papers (2023-10-01T00:12:31Z) - Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
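As a flavour of the MDP framing (a toy sketch with an invented graph and reward; no GFlowNet training is shown), states can be partial solutions and actions feasible one-step extensions:
```python
# Toy MDP over combinatorial solutions (here: independent sets of a small
# graph). States are partial solutions, actions add one feasible vertex,
# and complete states receive a reward.
GRAPH = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}

def actions(state):
    """Vertices that keep the set independent when added."""
    return [v for v in GRAPH if v not in state and not GRAPH[v] & state]

def reward(state):
    return len(state)

# Greedy rollout; a GFlowNet would instead sample actions from a learned
# policy trained so that trajectory probabilities match rewards.
state = set()
while actions(state):
    state.add(max(actions(state)))
print(state, reward(state))  # {0, 2, 3} 3
```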
arXiv Detail & Related papers (2023-05-26T15:13:09Z) - Decision Diagram-Based Branch-and-Bound with Caching for Dominance and
Suboptimality Detection [9.175779296469194]
This paper presents new ingredients to speed up the search by exploiting the structure of dynamic programming models.
The key idea is to prevent the repeated expansion of nodes corresponding to the same dynamic programming states by querying expansion thresholds cached throughout the search.
Experiments show that the pruning brought by this caching mechanism allows significantly reducing the number of nodes expanded by the algorithm.
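The caching idea can be sketched in a few lines, assuming subproblems are identified by dynamic-programming states; the knapsack model below is an invented stand-in for the paper's benchmarks:
```python
# Branch-and-bound with cached expansion thresholds: when search revisits a
# DP state with an accumulated value no better than the cached one, the
# revisit is dominated and the whole subtree is pruned.
ITEMS = [(3, 4), (2, 3), (4, 5), (1, 2)]   # (weight, value) pairs
CAPACITY = 6
cache = {}                                  # DP state -> expansion threshold

def search(i, remaining, value):
    state = (i, remaining)
    if cache.get(state, -1) >= value:       # dominated revisit of this state
        return -1                           # prune the whole subtree
    cache[state] = value
    if i == len(ITEMS):
        return value
    best = search(i + 1, remaining, value)                  # skip item i
    weight, profit = ITEMS[i]
    if weight <= remaining:                                  # take item i
        best = max(best, search(i + 1, remaining - weight, value + profit))
    return best

print(search(0, CAPACITY, 0))  # 9 = items of weight 3, 2 and 1
```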
arXiv Detail & Related papers (2022-11-22T10:18:33Z) - An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural
Network with Group Sparsity [13.27709100571336]
A leaky ReLU network with a group regularization term has been widely used in recent years.
We show that there is a lack of approaches to compute a stationary point deterministically.
We propose an inexact augmented Lagrangian algorithm for solving the new model.
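For orientation, here is a generic (inexact) augmented Lagrangian loop on an invented toy problem; the paper's group-sparse leaky-ReLU training model is far more involved:
```python
# Generic inexact augmented Lagrangian loop for min f(x) s.t. c(x) = 0,
# with the inner problem solved approximately by gradient steps.
import numpy as np

def f(x):      return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
def grad_f(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])
def c(x):      return x[0] + x[1]                 # want x0 + x1 = 0
def grad_c(x): return np.array([1.0, 1.0])

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(30):                # outer loop: multiplier updates
    for _ in range(200):           # inner loop: inexact minimization
        x = x - 0.02 * (grad_f(x) + (lam + rho * c(x)) * grad_c(x))
    lam = lam + rho * c(x)         # first-order multiplier update
print(x, c(x))                     # ~ (1.5, -1.5), constraint ~ 0
```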
arXiv Detail & Related papers (2022-05-11T11:53:15Z) - A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
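A minimal sketch of the dataless scheme (with an invented graph and a penalised max-independent-set relaxation standing in for the paper's construction): the trainable parameters are themselves a relaxed solution vector, optimised without any data:
```python
# "Dataless" training: the parameters ARE a relaxed solution x in [0,1]^n,
# and we minimize a differentiable surrogate of the combinatorial objective
# -- here, -sum(x) plus a penalty on edges inside the chosen set.
import numpy as np

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]      # 4-cycle; optimum size is 2
n, penalty, lr = 4, 4.0, 0.05
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, n)                   # random init breaks symmetry

for _ in range(500):
    g = -np.ones(n)                             # gradient of -sum(x)
    for i, j in EDGES:                          # edge penalty: x_i * x_j
        g[i] += penalty * x[j]
        g[j] += penalty * x[i]
    x = np.clip(x - lr * g, 0.0, 1.0)           # projected gradient step

print({i for i in range(n) if x[i] > 0.5})      # one of {0, 2} or {1, 3}
```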
arXiv Detail & Related papers (2022-03-15T19:21:31Z) - Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create the first end-to-end algorithm for solving CNP.
arXiv Detail & Related papers (2021-12-03T14:23:05Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
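In spirit (a generic sketch, not the GDPA construction), unfolding turns each iteration of a model-based solver into a layer with its own learnable parameters:
```python
# Algorithm unfolding: one solver iteration = one "layer"; each layer gets
# its own learnable parameter (here, a per-layer step size for gradient
# descent on a toy quadratic 0.5 x'Ax - b'x).
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
steps = [0.3, 0.25, 0.2, 0.15, 0.1]          # learnable per-layer step sizes

x = np.zeros(2)
for t in steps:                               # the unrolled network: 5 layers
    x = x - t * (A @ x - b)                   # one iteration as one layer
print(x, np.linalg.solve(A, b))               # unrolled output vs exact answer
```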
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
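Schematically, the learned model simply replaces the heuristic in best-first search; below, a hand-written stub stands in for the deep network on an invented grid problem (HATS's memory-bounded A* and integer least-squares details are not reproduced):
```python
# Best-first (A*) search with a pluggable heuristic; `heuristic` is where a
# learned model would slot in.
import heapq

GOAL = (3, 3)

def heuristic(node):                      # placeholder for a learned model
    return abs(node[0] - GOAL[0]) + abs(node[1] - GOAL[1])

def neighbors(node):
    x, y = node
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1))
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]

def a_star(start):
    frontier, seen = [(heuristic(start), 0, start)], {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == GOAL:
            return g
        for nxt in neighbors(node):
            if g + 1 < seen.get(nxt, float("inf")):
                seen[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + heuristic(nxt), g + 1, nxt))

print(a_star((0, 0)))  # 6 steps on the 4x4 grid
```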
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
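The mining step can be caricatured as retaining exactly those candidate constraints that every observed output satisfies; the candidates and data below are invented:
```python
# Toy constraint mining: keep each candidate constraint that all observed
# structured outputs satisfy.
examples = [                      # observed outputs: 2x2 permutation matrices
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
]

def row_sums_ok(y): return all(sum(row) == 1 for row in y)
def col_sums_ok(y): return all(sum(col) == 1 for col in zip(*y))
def all_ones(y):    return all(v == 1 for row in y for v in row)

candidates = {"rows sum to 1": row_sums_ok,
              "cols sum to 1": col_sums_ok,
              "all entries 1": all_ones}

mined = [name for name, check in candidates.items()
         if all(check(y) for y in examples)]
print(mined)  # ['rows sum to 1', 'cols sum to 1']; 'all entries 1' rejected
```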
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - Consistent Second-Order Conic Integer Programming for Learning Bayesian
Networks [2.7473982588529653]
We study the problem of learning the sparse DAG structure of a BN from continuous observational data.
The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions.
We propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution.
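A typical early-stopping rule of this kind (the 1% tolerance and the values below are illustrative, not the paper's criterion) halts once the relative gap between incumbent and bound is small:
```python
# Early stopping for branch and bound via the relative optimality gap.
def gap(incumbent, bound):
    """Relative gap for a minimization problem."""
    return abs(incumbent - bound) / max(abs(incumbent), 1e-9)

def should_stop(incumbent, bound, tol=0.01):
    return gap(incumbent, bound) <= tol

# e.g. incumbent 101.3 vs. lower bound 100.5 -> gap ~0.79%, so stop.
print(should_stop(101.3, 100.5))  # True
```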
arXiv Detail & Related papers (2020-05-29T00:13:15Z) - Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
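On a toy scale the underlying optimisation can even be brute-forced (rates invented; each channel serves at most one user), which clarifies what the learned models approximate:
```python
# Brute-force sum-rate maximisation over one-to-one channel assignments.
from itertools import permutations

rates = [[1.2, 0.4, 0.9],     # rates[u][c]: rate of user u on channel c
         [0.8, 1.5, 0.3],
         [0.5, 0.7, 1.1]]

best = max(permutations(range(3)),
           key=lambda assign: sum(rates[u][c] for u, c in enumerate(assign)))
print(best, sum(rates[u][c] for u, c in enumerate(best)))  # (0, 1, 2) 3.8
```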
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.