Effective and efficient structure learning with pruning and model
averaging strategies
- URL: http://arxiv.org/abs/2112.00398v1
- Date: Wed, 1 Dec 2021 10:35:34 GMT
- Title: Effective and efficient structure learning with pruning and model
averaging strategies
- Authors: Anthony C. Constantinou, Yang Liu, Neville K. Kitson, Kiattikun
Chobtham, Zhigao Guo
- Abstract summary: This paper describes an approximate BN structure learning algorithm that combines two novel strategies with hill-climbing search.
The algorithm starts by pruning the search space of graphs, where the pruning strategy can be viewed as an aggressive version of the pruning strategies typically applied to combinatorial optimisation structure learning problems.
It then performs model averaging in the hill-climbing search process and moves to the neighbouring graph that maximises the objective function on average over that graph and all its valid neighbouring graphs.
- Score: 9.023722579074734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the structure of a Bayesian Network (BN) with score-based solutions
involves exploring the search space of possible graphs and moving towards the
graph that maximises a given objective function. Some algorithms offer exact
solutions that guarantee to return the graph with the highest objective score,
while others offer approximate solutions in exchange for reduced computational
complexity. This paper describes an approximate BN structure learning
algorithm, which we call Model Averaging Hill-Climbing (MAHC), that combines
two novel strategies with hill-climbing search. The algorithm starts by pruning
the search space of graphs, where the pruning strategy can be viewed as an
aggressive version of the pruning strategies that are typically applied to
combinatorial optimisation structure learning problems. It then performs model
averaging in the hill-climbing search process and moves to the neighbouring
graph that maximises the objective function, on average, for that neighbouring
graph and over all its valid neighbouring graphs. Comparisons with other
algorithms spanning different classes of learning suggest that the combination
of aggressive pruning with model averaging is both effective and efficient,
particularly in the presence of data noise.
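To make the model-averaging idea concrete, here is a minimal, self-contained Python sketch of a hill-climbing search over DAG edge sets in which each candidate is ranked by the average score over itself and all of its valid neighbours. This is only an illustration of the averaging step, not the authors' MAHC implementation: the `score` function is a toy stand-in (negative structural Hamming distance to a hypothetical reference graph) in place of a real decomposable score such as BIC, and the three-node graph, `TRUE_GRAPH`, and all function names are assumptions for the example.

```python
from collections import defaultdict
from itertools import permutations

NODES = ["A", "B", "C"]

def is_acyclic(edges):
    # Kahn's algorithm: the edge set is a DAG iff all nodes can be
    # processed in topological order.
    indeg = {n: 0 for n in NODES}
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = [n for n in NODES if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(NODES)

def neighbours(edges):
    # Standard hill-climbing neighbourhood: single-edge additions,
    # deletions and reversals that keep the graph acyclic.
    out = []
    for u, v in permutations(NODES, 2):
        e = (u, v)
        if e in edges:
            out.append(edges - {e})                      # delete
            rev = (edges - {e}) | {(v, u)}
            if is_acyclic(rev):
                out.append(rev)                          # reverse
        elif (v, u) not in edges and is_acyclic(edges | {e}):
            out.append(edges | {e})                      # add
    return out

# Hypothetical reference graph used only by the toy score below.
TRUE_GRAPH = frozenset({("A", "B"), ("B", "C")})

def score(edges):
    # Toy stand-in for a decomposable objective score (e.g. BIC):
    # negative structural Hamming distance to the reference graph.
    return -len(edges ^ TRUE_GRAPH)

def averaged_score(edges):
    # Model averaging: rank a candidate by the mean of its own score
    # and the scores of all its valid neighbouring graphs.
    vals = [score(edges)] + [score(n) for n in neighbours(edges)]
    return sum(vals) / len(vals)

def averaged_hill_climb(start=frozenset()):
    # Greedily move to the neighbour with the best averaged score;
    # stop at a local optimum of the averaged objective.
    current = start
    while True:
        best = max(neighbours(current), key=averaged_score, default=None)
        if best is None or averaged_score(best) <= averaged_score(current):
            return current
        current = best
```

Because every move must strictly improve the averaged objective, the search is guaranteed to terminate at a local optimum of the averaged score rather than of the raw score, which is what gives the method its robustness to noisy single-graph scores.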
Related papers
- Bayesian Optimization of Functions over Node Subsets in Graphs [14.670181702535825]
We propose a novel framework for optimization on graphs.
We map each subset of $k$ nodes in the original graph to a node in a new combinatorial graph.
Experiments under both synthetic and real-world setups demonstrate the effectiveness of the proposed BO framework.
arXiv Detail & Related papers (2024-05-24T00:24:55Z) - Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Learning Heuristics for the Maximum Clique Enumeration Problem Using Low
Dimensional Representations [0.0]
We use a learning framework to prune the input graph, reducing its size for the maximum clique enumeration problem.
We study how different vertex representations affect the runtime performance of this method.
We observe that using local graph features in the classification process produces more accurate results when combined with a feature elimination process.
arXiv Detail & Related papers (2022-10-30T22:04:32Z) - Subgraph Matching via Query-Conditioned Subgraph Matching Neural
Networks and Bi-Level Tree Search [33.9052190473029]
Subgraph Matching is a core operation in graph database search, biomedical analysis, social group finding, etc.
In this paper, we propose a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs.
Experiments on five large real-world target graphs show that N-BLS can significantly improve the subgraph matching performance.
arXiv Detail & Related papers (2022-07-21T04:47:21Z) - Sublinear Algorithms for Hierarchical Clustering [14.124026862687941]
We study hierarchical clustering for massive graphs under three well-studied models of sublinear computation.
We design sublinear algorithms for hierarchical clustering in all three models.
arXiv Detail & Related papers (2022-06-15T16:25:27Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z) - Reinforcement Learning Based Query Vertex Ordering Model for Subgraph
Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z) - Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between the graphs of different views by minimising the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z) - A Bi-Level Framework for Learning to Solve Combinatorial Optimization on
Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z) - Structured Graph Learning for Clustering and Semi-supervised
Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.