Improving Bayesian Network Structure Learning in the Presence of
Measurement Error
- URL: http://arxiv.org/abs/2011.09776v1
- Date: Thu, 19 Nov 2020 11:27:47 GMT
- Title: Improving Bayesian Network Structure Learning in the Presence of
Measurement Error
- Authors: Yang Liu, Anthony C. Constantinou, ZhiGao Guo
- Abstract summary: This paper describes an algorithm that can be added as an additional learning phase at the end of any structure learning algorithm.
The proposed correction algorithm successfully improves the graphical score of four well-established structure learning algorithms.
- Score: 11.103936437655575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structure learning algorithms that learn the graph of a Bayesian network from
observational data often do so by assuming the data correctly reflect the true
distribution of the variables. However, this assumption does not hold in the
presence of measurement error, which can lead to spurious edges. This is one of
the reasons why the synthetic performance of these algorithms often
overestimates real-world performance. This paper describes an algorithm that
can be added as an additional learning phase at the end of any structure
learning algorithm, and serves as a correction learning phase that removes
potential false positive edges. The results show that the proposed correction
algorithm successfully improves the graphical score of four well-established
structure learning algorithms spanning different classes of learning in the
presence of measurement error.
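The abstract describes the correction phase only at a high level, so the following Python sketch is an illustration of the general idea rather than the paper's actual method: starting from the graph returned by any structure learning algorithm, greedily delete edges whose removal improves a decomposable BIC score on the observed data, on the assumption that such edges are likely false positives induced by measurement error. The function names (`local_bic`, `prune_false_positives`), the choice of BIC, and the greedy deletion loop are assumptions made for this example and are not taken from the paper.

```python
import numpy as np
from math import log


def local_bic(data, child, parents):
    """BIC contribution of one node given its parent set (discrete data).

    data: 2D integer-coded array, rows are samples and columns are variables.
    child: column index of the node being scored.
    parents: list of column indices of the node's parents.
    """
    n = data.shape[0]
    child_vals = np.unique(data[:, child])
    if parents:
        # Group samples by their observed parent configuration.
        _, config_idx = np.unique(data[:, parents], axis=0, return_inverse=True)
        config_idx = config_idx.ravel()
    else:
        config_idx = np.zeros(n, dtype=int)
    n_configs = int(config_idx.max()) + 1
    loglik = 0.0
    for q in range(n_configs):
        rows = data[config_idx == q, child]
        total = len(rows)
        for v in child_vals:
            count = int(np.count_nonzero(rows == v))
            if count > 0:
                loglik += count * log(count / total)
    # Penalise by the number of free parameters of the local distribution.
    n_params = n_configs * (len(child_vals) - 1)
    return loglik - 0.5 * log(n) * n_params


def prune_false_positives(data, edges):
    """Greedy correction phase: repeatedly drop the single directed edge whose
    removal most improves the total BIC score, stopping when no removal helps."""
    edges = set(edges)
    while True:
        best_gain, best_edge = 0.0, None
        for (u, v) in edges:
            parents = [p for (p, c) in edges if c == v]
            # Removing u -> v only affects the local score of node v.
            gain = (local_bic(data, v, [p for p in parents if p != u])
                    - local_bic(data, v, parents))
            if gain > best_gain:
                best_gain, best_edge = gain, (u, v)
        if best_edge is None:
            return edges
        edges.discard(best_edge)


# Hypothetical usage: 'learned_edges' would come from any structure learner.
# rng = np.random.default_rng(0)
# data = rng.integers(0, 2, size=(1000, 3))      # three binary variables
# learned_edges = [(0, 1), (0, 2), (1, 2)]
# corrected = prune_false_positives(data, learned_edges)
```

Because BIC decomposes over nodes, removing an edge only changes the local score of its child, which is why the sketch rescores only the child node for each candidate deletion; the correction algorithm in the paper may use a different score or test.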
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
- Improved Graph-based semi-supervised learning Schemes [0.0]
In this work, we improve the accuracy of several known algorithms to address the classification of large datasets when few labels are available.
Our framework lies in the realm of graph-based semi-supervised learning.
arXiv Detail & Related papers (2024-06-30T16:50:08Z)
- Structured Prediction in Online Learning [66.36004256710824]
We study a theoretical and algorithmic framework for structured prediction in the online learning setting.
We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting.
We consider a second algorithm designed especially for non-stationary data distributions, including adversarial data.
arXiv Detail & Related papers (2024-06-18T07:45:02Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which does not rely on BP optimization as in FF.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z)
- Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- Hybrid Bayesian network discovery with latent variables by scoring multiple interventions [5.994412766684843]
We present the hybrid mFGS-BS (majority rule and Fast Greedy equivalence Search with Bayesian Scoring) algorithm for structure learning from discrete data.
The algorithm assumes causal insufficiency in the presence of latent variables and produces a Partial Ancestral Graph (PAG).
Experimental results show that mFGS-BS improves structure learning accuracy relative to the state-of-the-art and it is computationally efficient.
arXiv Detail & Related papers (2021-12-20T14:54:41Z)
- Demystifying Deep Neural Networks Through Interpretation: A Survey [3.566184392528658]
Modern deep learning algorithms tend to optimize an objective metric, such as minimizing a cross-entropy loss on a training dataset, in order to learn.
The problem is that a single metric is an incomplete description of real-world tasks.
Several works tackle the problem of interpretability to provide insights into the behavior and reasoning process of neural networks.
arXiv Detail & Related papers (2020-12-13T17:56:41Z)
- Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
arXiv Detail & Related papers (2020-06-12T09:17:08Z)
- Large-scale empirical validation of Bayesian Network structure learning algorithms with noisy data [9.04391541965756]
This paper investigates the performance of 15 structure learning algorithms.
Each algorithm is tested over multiple case studies, sample sizes, types of noise, and assessed with multiple evaluation criteria.
Results suggest traditional synthetic performance may overestimate real-world performance by anywhere between 10% and more than 50%.
arXiv Detail & Related papers (2020-05-18T18:40:09Z)
- An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set [0.0]
We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error.
Results show that this dependence of accuracy on the location of error is independent of the algorithm.
arXiv Detail & Related papers (2020-05-07T06:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.