Rule-based Evolutionary Bayesian Learning
- URL: http://arxiv.org/abs/2202.13778v1
- Date: Mon, 28 Feb 2022 13:24:00 GMT
- Title: Rule-based Evolutionary Bayesian Learning
- Authors: Themistoklis Botsas, Lachlan R. Mason, Omar K. Matar, Indranil Pan
- Abstract summary: We extend the rule-based Bayesian Regression methodology with grammatical evolution.
Our motivation is that grammatical evolution can potentially detect patterns in the data that carry information equivalent to that of expert knowledge.
We illustrate the use of the rule-based Evolutionary Bayesian learning technique by applying it to synthetic as well as real data, and examine the results in terms of point predictions and associated uncertainty.
- Score: 0.802904964931021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In our previous work, we introduced the rule-based Bayesian Regression, a
methodology that leverages two concepts: (i) Bayesian inference, for the
general framework and uncertainty quantification and (ii) rule-based systems
for the incorporation of expert knowledge and intuition. The resulting method
creates a penalty equivalent to a common Bayesian prior, but it also includes
information that typically would not be available within a standard Bayesian
context. In this work, we extend the aforementioned methodology with
grammatical evolution, a symbolic genetic programming technique that we utilise
for automating the rules' derivation. Our motivation is that grammatical
evolution can potentially detect patterns in the data that carry information
equivalent to that of expert knowledge. We illustrate the use of
the rule-based Evolutionary Bayesian learning technique by applying it to
synthetic as well as real data, and examine the results in terms of point
predictions and associated uncertainty.
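The core mechanism described above, grammatical evolution mapping an integer genome to a symbolic rule via a BNF grammar, can be sketched as follows. This is a minimal illustration with a toy grammar; the grammar, symbol names, and the penalty idea are illustrative assumptions, not the paper's actual implementation.

```python
# Toy BNF grammar: each non-terminal maps to a list of productions.
# Illustrative only; not the grammar used in the paper.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>": [["+"], ["*"]],
    "<var>": [["x"], ["1.0"]],
}

def genome_to_rule(genome, symbol="<expr>", max_depth=8):
    """Map a genome (list of ints) to a rule string via the grammar.

    Each codon selects a production with the standard grammatical
    evolution modulo rule: choice = codon % len(productions).
    """
    codons = iter(genome)

    def expand(sym, depth):
        if sym not in GRAMMAR:
            return sym  # terminal symbol: emit as-is
        productions = GRAMMAR[sym]
        if depth >= max_depth:
            production = productions[-1]  # force a terminating branch
        else:
            codon = next(codons, 0)  # reuse 0 if the genome is exhausted
            production = productions[codon % len(productions)]
        return " ".join(expand(s, depth + 1) for s in production)

    return expand(symbol, 0)

# An evolved rule can then enter the log-posterior as a penalty term,
# playing the role of a prior (hypothetical scheme for illustration):
def rule_penalty(rule, x, strength=1.0):
    """Squared value of the evolved expression at x, scaled by strength."""
    return strength * eval(rule, {"x": x}) ** 2

print(genome_to_rule([0, 1, 0, 1, 1]))  # → x * x
```

In an evolutionary loop, genomes would be mutated and recombined, and each candidate rule scored by the resulting posterior fit, so that rules carrying useful structure survive.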
Related papers
- Deep Learning: A Tutorial [0.8158530638728498]
We provide a review of deep learning methods which provide insight into structured high-dimensional data.
Deep learning uses layers of semi-affine input transformations to provide a predictive rule.
Applying these layers of transformations leads to a set of attributes (or, features) to which probabilistic statistical methods can be applied.
arXiv Detail & Related papers (2023-10-10T01:55:22Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - Neural-based classification rule learning for sequential data [0.0]
We propose a novel differentiable fully interpretable method to discover both local and global patterns for rule-based binary classification.
It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity.
We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset.
arXiv Detail & Related papers (2023-02-22T11:05:05Z) - Toward Learning Robust and Invariant Representations with Alignment Regularization and Data Augmentation [76.85274970052762]
This paper is motivated by a proliferation of options of alignment regularizations.
We evaluate the performances of several popular design choices along the dimensions of robustness and invariance.
We also formally analyze the behavior of alignment regularization to complement our empirical study under assumptions we consider realistic.
arXiv Detail & Related papers (2022-06-04T04:29:19Z) - Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z) - Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge [91.15301779076187]
We introduce verbalized knowledge into the minibatches of a BERT model during pre-training and evaluate how well the model generalizes to supported inferences.
We find generalization does not improve over the course of pre-training, suggesting that commonsense knowledge is acquired from surface-level, co-occurrence patterns rather than induced, systematic reasoning.
arXiv Detail & Related papers (2021-12-16T03:13:04Z) - Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
arXiv Detail & Related papers (2020-09-07T16:47:30Z) - Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference over that Knowledge Base with linear programming.
It identifies the decisive features responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.