Feature Selection for Efficient Local-to-Global Bayesian Network
Structure Learning
- URL: http://arxiv.org/abs/2112.10369v1
- Date: Mon, 20 Dec 2021 07:44:38 GMT
- Title: Feature Selection for Efficient Local-to-Global Bayesian Network
Structure Learning
- Authors: Kui Yu, Zhaolong Ling, Lin Liu, Hao Wang, Jiuyong Li
- Abstract summary: We propose an efficient F2SL (feature selection-based structure learning) approach to local-to-global BN structure learning.
The F2SL approach first employs the MRMR approach to learn a DAG skeleton, then orients edges in the skeleton.
Compared to the state-of-the-art local-to-global BN learning algorithms, the experiments validated that the proposed algorithms are more efficient and provide competitive structure learning quality.
- Score: 18.736822756439437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The local-to-global learning approach plays an essential role in Bayesian
network (BN) structure learning. Existing local-to-global learning algorithms first
construct the skeleton of a DAG (directed acyclic graph) by learning the MB
(Markov blanket) or PC (parents and children) set of each variable in a data set,
then orient the edges in the skeleton. However, existing MB and PC learning methods
are often computationally expensive, especially for large BNs, resulting
in inefficient local-to-global learning algorithms. To tackle this problem, in
this paper, we develop an efficient local-to-global learning approach using
feature selection. Specifically, we first analyze the rationale of the
well-known Minimum-Redundancy and Maximum-Relevance (MRMR) feature selection
approach for learning a PC set of a variable. Based on the analysis, we propose
an efficient F2SL (feature selection-based structure learning) approach to
local-to-global BN structure learning. The F2SL approach first employs the MRMR
approach to learn a DAG skeleton, then orients edges in the skeleton. Employing
independence tests or score functions for orienting edges, we instantiate the
F2SL approach into two new algorithms, F2SL-c (using independence tests) and
F2SL-s (using score functions). Experiments validated that, compared to
state-of-the-art local-to-global BN learning algorithms, the two proposed
algorithms are more efficient while providing competitive structure learning
quality.
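The MRMR criterion described above greedily picks features that are maximally relevant to the target while minimally redundant with features already chosen. A minimal sketch of that criterion, assuming discrete data and empirical mutual information (the paper's F2SL algorithms build a full skeleton and orient its edges on top of such a step; the function and variable names below are illustrative, not from the paper):

```python
# Hypothetical sketch: greedy MRMR selection of a candidate PC
# (parents-and-children) set for one target variable, using empirical
# mutual information over discrete samples.
from collections import Counter
import math


def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts converted to frequencies
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi


def mrmr_select(data, target, k):
    """Greedily pick up to k features maximizing relevance minus redundancy.

    data:   dict mapping feature name -> list of discrete values
    target: list of discrete values for the target variable
    """
    selected = []
    remaining = set(data)
    while remaining and len(selected) < k:
        def score(f):
            relevance = mutual_information(data[f], target)
            redundancy = (
                sum(mutual_information(data[f], data[s]) for s in selected) / len(selected)
                if selected else 0.0
            )
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Running `mrmr_select` once per variable yields candidate neighbor sets whose union (after symmetry correction) forms a skeleton, which a constraint-based or score-based step can then orient, as F2SL-c and F2SL-s do respectively.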
Related papers
- A Structural-Clustering Based Active Learning for Graph Neural Networks [16.85038790429607]
We propose the Structural-Clustering PageRank method for improved Active learning (SPA) specifically designed for graph-structured data.
SPA integrates community detection using the SCAN algorithm with the PageRank scoring method for efficient and informative sample selection.
arXiv Detail & Related papers (2023-12-07T14:04:38Z)
- Better Together: Using Multi-task Learning to Improve Feature Selection within Structural Datasets [0.0]
This paper presents the use of multi-task learning (MTL) to provide automatic feature selection for a structural dataset.
The classification task is to differentiate between the port and starboard side of a tailplane, for samples from two aircraft of the same model.
The MTL results were interpretable, highlighting structural differences as opposed to differences in experimental set-up.
arXiv Detail & Related papers (2023-03-08T10:19:55Z)
- Tightly Coupled Learning Strategy for Weakly Supervised Hierarchical Place Recognition [0.09558392439655011]
We propose a tightly coupled learning (TCL) strategy to train triplet models.
It combines global and local descriptors for joint optimization.
Our lightweight unified model is better than several state-of-the-art methods.
arXiv Detail & Related papers (2022-02-14T03:20:39Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Any Part of Bayesian Network Structure Learning [17.46459748913491]
We study an interesting and challenging problem, learning any part of a Bayesian network (BN) structure.
We first present a new concept of Expand-Backtracking to explain why local BN structure learning methods have the false edge orientation problem.
We then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm.
arXiv Detail & Related papers (2021-03-23T10:03:31Z)
- BCFNet: A Balanced Collaborative Filtering Network with Attention Mechanism [106.43103176833371]
Collaborative Filtering (CF) based recommendation methods have been widely studied.
We propose a novel recommendation model named Balanced Collaborative Filtering Network (BCFNet)
In addition, an attention mechanism is designed to better capture the hidden information within implicit feedback and strengthen the learning ability of the neural network.
arXiv Detail & Related papers (2021-03-10T14:59:23Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- Reinforcement Learning for Variable Selection in a Branch and Bound Algorithm [0.10499611180329801]
We leverage patterns in real-world instances to learn from scratch a new branching strategy optimised for a given problem.
We propose FMSTS, a novel Reinforcement Learning approach specifically designed for this task.
arXiv Detail & Related papers (2020-05-20T13:15:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.