Bayes Meets Entailment and Prediction: Commonsense Reasoning with
Non-monotonicity, Paraconsistency and Predictive Accuracy
- URL: http://arxiv.org/abs/2012.08479v3
- Date: Wed, 27 Jan 2021 18:13:00 GMT
- Title: Bayes Meets Entailment and Prediction: Commonsense Reasoning with
Non-monotonicity, Paraconsistency and Predictive Accuracy
- Authors: Hiroyuki Kido, Keishi Okamoto
- Abstract summary: We introduce a generative model of logical consequence relations.
It formalises how the truth value of a sentence is probabilistically generated from a probability distribution over states of the world.
We show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
- Score: 2.7412662946127755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent success of Bayesian methods in neuroscience and artificial
intelligence gives rise to the hypothesis that the brain is a Bayesian machine.
Since logic and learning are both activities of the human brain, this suggests
the further hypothesis that a common Bayesian interpretation underlies both
logical reasoning and machine learning. In this paper, we introduce a
generative model of logical consequence relations. It formalises how the truth
value of a sentence is probabilistically generated from a probability
distribution over states of the world. We show that the generative model
characterises a classical consequence relation, a paraconsistent consequence
relation and a non-monotonic consequence relation. In particular, the
generative model gives a new consequence relation that outperforms them in
reasoning with inconsistent knowledge. We also show that the generative model
gives a new classification algorithm that outperforms several representative
algorithms in predictive accuracy and computational complexity on the Kaggle
Titanic dataset.
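As a rough illustration of the idea (not the authors' code), the following Python sketch generates truth values from a distribution over possible worlds and reads entailment off as a conditional probability. The atoms, the uniform prior and the zero-probability convention for unsatisfiable premises are all assumptions made for this example.

```python
from itertools import product

# A "state of the world" is a truth assignment to atomic propositions;
# a sentence is a function from worlds to truth values.
atoms = ["rain", "wet"]

def worlds():
    """Enumerate all possible worlds (truth assignments)."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def prob(sentence, given=None, prior=None):
    """P(sentence | given) under a distribution over worlds.

    `sentence` and `given` map a world to True/False; `prior` maps a
    world to its probability (uniform if omitted -- an assumption).
    """
    if prior is None:
        n = 2 ** len(atoms)
        prior = lambda w: 1.0 / n
    num = den = 0.0
    for w in worlds():
        if given is None or given(w):
            den += prior(w)
            if sentence(w):
                num += prior(w)
    # Returning 0 for unsatisfiable premises is one simple way to avoid
    # the classical "explosion" from inconsistent knowledge.
    return num / den if den > 0 else 0.0

# Background knowledge "rain -> wet", plus the observation "rain":
premises = lambda w: ((not w["rain"]) or w["wet"]) and w["rain"]
print(prob(lambda w: w["wet"], given=premises))  # 1.0
```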
Related papers
- Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0]
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2024-02-14T09:43:35Z)
- Machine-Guided Discovery of a Real-World Rogue Wave Model [0.0]
We present a case study on discovering a new symbolic model for oceanic rogue waves from data using causal analysis, deep learning, parsimony-guided model selection, and symbolic regression.
We apply symbolic regression to distill this black-box model into a mathematical equation that retains the neural network's predictive capabilities.
This showcases how machine learning can facilitate inductive scientific discovery, and paves the way for more accurate rogue wave forecasting.
arXiv Detail & Related papers (2023-11-21T12:50:24Z)
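The distillation step described in this entry can be pictured with a small sketch: train a black-box regressor, then fit a parsimonious closed-form template to its predictions. The data, the exponential template and all names below are invented for illustration; real symbolic regression searches over expression structures rather than fitting one fixed template.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

# Hypothetical data standing in for wave measurements: y = 2*exp(0.5*x) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(500, 1))
y = 2.0 * np.exp(0.5 * X[:, 0]) + rng.normal(0, 0.1, size=500)

# Step 1: fit a black-box neural model to the data.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Step 2: distill the network into an equation by fitting a symbolic
# template to the network's own predictions.
grid = np.linspace(0, 3, 200).reshape(-1, 1)
template = lambda x, a, b: a * np.exp(b * x)
(a, b), _ = curve_fit(template, grid[:, 0], net.predict(grid), p0=(1.0, 1.0))
print(f"distilled equation: y = {a:.2f} * exp({b:.2f} * x)")
```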
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility [0.6853165736531939]
We propose a temporal probabilistic model that generates symbolic knowledge from data.
The correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation.
arXiv Detail & Related papers (2023-01-20T10:55:49Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
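The kind of counterfactual query NCMs are trained to answer can be shown on a hand-written toy SCM. The abduction-action-prediction steps below follow the standard recipe, while the linear equations and numbers are invented for illustration.

```python
# Toy structural causal model (hand-written, not a neural one):
#   X := U_x
#   Y := 2*X + U_y
# Counterfactual query: having observed (X=1, Y=5), what would Y have
# been had X been 0?

def f_y(x, u_y):
    """Structural equation for Y."""
    return 2 * x + u_y

# 1. Abduction: infer the exogenous noise consistent with the evidence.
x_obs, y_obs = 1.0, 5.0
u_y = y_obs - 2 * x_obs      # invert Y's equation: u_y = 3

# 2. Action: intervene do(X = 0), cutting X's own mechanism.
x_cf = 0.0

# 3. Prediction: re-run the modified model with the inferred noise.
y_cf = f_y(x_cf, u_y)
print(f"Y would have been {y_cf}")  # 3.0
```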
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way to neurons in the brain, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
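A minimal sketch of the predictive-processing idea this entry refers to: a layer predicts the activity below it, and both the latent state and the weights are adjusted from the local prediction error. The dimensions and learning rates are arbitrary choices, and this is a generic predictive-coding step, not the paper's full framework.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                    # activity of the lower layer (data)
z = rng.normal(size=8)                     # higher-layer latent state
W = rng.normal(scale=0.3, size=(16, 8))    # top-down generative weights

for _ in range(500):
    err = x - W @ z                        # local prediction error
    z += 0.05 * (W.T @ err)                # infer a better latent state
    W += 0.005 * np.outer(err, z)          # Hebbian-like local weight update

print(float(np.mean((x - W @ z) ** 2)))    # residual error (shrinks over training)
```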
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
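Since this entry's contribution is an EM algorithm over latent variables, here is the generic EM loop such methods build on, written for a toy model with one binary latent cause. The data and parameters are invented, and the real causal EM operates on the latent variables of a structural causal model rather than this mixture.

```python
import numpy as np

# Toy model: latent U ~ Bernoulli(pi); observed X | U=u ~ Bernoulli(theta[u]).
# EM re-estimates (pi, theta) from observations of X alone.
x = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])   # hypothetical data

pi, theta = 0.5, np.array([0.3, 0.8])           # initial guesses
for _ in range(50):
    # E-step: posterior responsibility r_i = P(U=1 | x_i).
    lik1 = pi * theta[1] ** x * (1 - theta[1]) ** (1 - x)
    lik0 = (1 - pi) * theta[0] ** x * (1 - theta[0]) ** (1 - x)
    r = lik1 / (lik1 + lik0)
    # M-step: maximise the expected complete-data log-likelihood.
    pi = r.mean()
    theta = np.array([((1 - r) * x).sum() / (1 - r).sum(),
                      (r * x).sum() / r.sum()])

print(pi, theta)
```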
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Bayesian Entailment Hypothesis: How Brains Implement Monotonic and Non-monotonic Reasoning [0.6853165736531939]
We give a Bayesian account of entailment and characterize its abstract inferential properties.
Preferential entailment, a representative non-monotonic consequence relation, is shown to be maximum-a-posteriori entailment.
We discuss the merits of our proposals in terms of encoding preferences on defaults, handling change and contradiction, and modeling human entailment.
arXiv Detail & Related papers (2020-05-03T01:26:02Z)
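A minimal sketch of maximum-a-posteriori entailment under an invented prior: keep only the most probable worlds satisfying the premises and check the conclusion there. Adding the premise `penguin` retracts the default conclusion `flies`, which is exactly the non-monotonic behaviour this entry describes.

```python
from itertools import product

atoms = ["bird", "penguin", "flies"]

def prior(w):
    """Hypothetical prior: penguins are rare, birds normally fly,
    penguins normally don't fly, and penguins are normally birds."""
    p = 0.1 if w["penguin"] else 0.9
    p *= 0.1 if (w["bird"] and not w["flies"]) else 0.9
    p *= 0.001 if (w["penguin"] and w["flies"]) else 1.0
    p *= 0.01 if (w["penguin"] and not w["bird"]) else 1.0
    return p

def map_entails(premises, conclusion):
    """Premises MAP-entail the conclusion iff it holds in every
    most-probable world satisfying the premises."""
    ws = [dict(zip(atoms, v)) for v in product([True, False], repeat=len(atoms))]
    sat = [w for w in ws if premises(w)]
    best = max(prior(w) for w in sat)
    return all(conclusion(w) for w in sat if prior(w) == best)

print(map_entails(lambda w: w["bird"], lambda w: w["flies"]))   # True
print(map_entails(lambda w: w["bird"] and w["penguin"],
                  lambda w: w["flies"]))                        # False: retracted
```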
This list is automatically generated from the titles and abstracts of the papers in this site.