Bayesian Entailment Hypothesis: How Brains Implement Monotonic and
Non-monotonic Reasoning
- URL: http://arxiv.org/abs/2005.00961v3
- Date: Wed, 27 Jan 2021 18:00:03 GMT
- Title: Bayesian Entailment Hypothesis: How Brains Implement Monotonic and
Non-monotonic Reasoning
- Authors: Hiroyuki Kido
- Abstract summary: We give a Bayesian account of entailment and characterize its abstract inferential properties.
The preferential entailment, which is a representative non-monotonic consequence relation, is shown to be maximum a posteriori entailment.
We discuss the merits of our proposals in terms of encoding preferences on defaults, handling change and contradiction, and modeling human entailment.
- Score: 0.6853165736531939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent success of Bayesian methods in neuroscience and artificial
intelligence gives rise to the hypothesis that the brain is a Bayesian machine.
Since logic, as the laws of thought, is a product and practice of the human
brain, this leads to a further hypothesis: that there is a Bayesian algorithm
and data structure for logical reasoning. In this paper, we give a Bayesian
account of entailment and characterize its abstract inferential properties.
The Bayesian entailment is shown to be a monotonic consequence relation in an
extreme case. In general, it is a non-monotonic consequence relation that
lacks Cautious Monotony and Cut. The preferential entailment, a representative
non-monotonic consequence relation, is shown to be maximum a posteriori
entailment, which is an approximation of the Bayesian entailment. We finally
discuss the merits of our proposals in terms of encoding preferences on
defaults, handling change and contradiction, and modeling human entailment.
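
The abstract leaves the construction implicit, so here is a minimal sketch of
the idea under our own illustrative assumptions, not the paper's exact
formalism: sentences are modeled as sets of possible worlds, the prior over
worlds is invented for the example, and an acceptance threshold theta stands
in for the "extreme case" (theta = 1 accepts only conclusions true in every
premise-world and so behaves monotonically, while theta < 1 yields the
non-monotonic behaviour described above). The map_entails function sketches
the maximum a posteriori approximation: the conclusion must hold in every
most-probable world satisfying the premise.

    import itertools

    # Hedged sketch of Bayesian entailment over possible worlds. The atoms,
    # prior, and threshold are illustrative assumptions, not the paper's
    # exact construction.
    ATOMS = ("bird", "penguin", "flies")
    WORLDS = [dict(zip(ATOMS, vals))
              for vals in itertools.product((True, False), repeat=len(ATOMS))]

    def prior(w):
        # Illustrative prior: penguins are rare, non-flying birds.
        if w["penguin"] and not w["bird"]:
            return 0.0                                # every penguin is a bird
        if w["penguin"]:
            return 0.01 if not w["flies"] else 0.001  # penguins rarely fly
        if w["bird"]:
            return 0.3 if w["flies"] else 0.005       # typical birds fly
        return 0.1                                    # non-bird worlds

    def p(event):
        # Unnormalised probability of an event (event: world -> bool);
        # the normalisation constant cancels in the conditional below.
        return sum(prior(w) for w in WORLDS if event(w))

    def bayesian_entails(alpha, beta, theta=0.9):
        # alpha |~ beta iff P(beta | alpha) >= theta; theta = 1 is the
        # extreme case that behaves like classical, monotonic entailment.
        p_alpha = p(alpha)
        if p_alpha == 0:
            return True  # vacuous entailment from an impossible premise
        return p(lambda w: alpha(w) and beta(w)) / p_alpha >= theta

    def map_entails(alpha, beta):
        # MAP approximation: beta must hold in every maximally probable
        # world satisfying alpha (a preferential-style relation).
        alpha_worlds = [w for w in WORLDS if alpha(w) and prior(w) > 0]
        if not alpha_worlds:
            return True
        best = max(prior(w) for w in alpha_worlds)
        return all(beta(w) for w in alpha_worlds if prior(w) == best)

    bird = lambda w: w["bird"]
    penguin = lambda w: w["penguin"]
    flies = lambda w: w["flies"]

    # Strengthening the premise retracts the conclusion (non-monotonicity):
    print(bayesian_entails(bird, flies))                              # True
    print(bayesian_entails(lambda w: bird(w) and penguin(w), flies))  # False
    print(map_entails(bird, flies))                                   # True
    print(map_entails(lambda w: bird(w) and penguin(w), flies))       # False

Fixed-threshold relations of this kind are a standard way to see the failure
of Cautious Monotony and Cut that the abstract mentions: conclusions that each
clear the threshold individually can jointly drive a conditional probability
below it.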
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497] (arXiv, 2024-10-08)
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
- Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0] (arXiv, 2024-02-14)
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939] (arXiv, 2023-05-18)
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
- Understanding Approximation for Bayesian Inference in Neural Networks [7.081604594416339] (arXiv, 2022-11-11)
I explore approximate inference in Bayesian neural networks.
The expected utility of the approximate posterior can measure inference quality.
Continual and active learning set-ups pose challenges that have nothing to do with posterior quality.
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714] (arXiv, 2022-05-25)
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
- Observing Interventions: A logic for thinking about experiments [62.997667081978825] (arXiv, 2021-11-25)
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
- Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy [2.7412662946127755] (arXiv, 2020-12-15)
We introduce a generative model of logical consequence relations.
It formalises the process of how the truth value of a sentence is probabilistically generated from the probability distribution over states of the world.
We show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset (a toy sketch of this generative-classification idea appears after this list).
- Causal Expectation-Maximisation [70.45873402967297] (arXiv, 2020-11-04)
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235] (arXiv, 2020-06-05)
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions are efficiently automated. Using models with more than 8000 variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
- A non-commutative Bayes' theorem [0.0] (arXiv, 2020-05-08)
We prove an analogue of Bayes' theorem in the joint classical and quantum context.
We further develop non-commutative almost everywhere equivalence.
We illustrate how the procedure works for several examples relevant to quantum information theory.
- Invariant Rationalization [84.1861516092232] (arXiv, 2020-03-22)
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
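
A toy sketch of the generative-classification idea from the "Bayes Meets
Entailment and Prediction" entry above, under our own assumptions: each data
row is treated as a sampled state of the world, and a prediction is the
conditional probability of a query sentence given an evidence sentence under
the empirical distribution over states. The data, names, and API below are
illustrative, not the paper's implementation.

    from collections import Counter

    def fit(rows):
        # Empirical distribution over observed states of the world.
        return Counter(map(tuple, rows))

    def p_cond(states, evidence, query):
        # P(query | evidence); evidence and query are state -> bool tests.
        denom = sum(n for s, n in states.items() if evidence(s))
        num = sum(n for s, n in states.items() if evidence(s) and query(s))
        return num / denom if denom else 0.0

    # Toy Titanic-style rows: (sex, passenger class, survived).
    rows = [("f", 1, 1), ("f", 1, 1), ("m", 3, 0), ("m", 3, 0),
            ("m", 1, 1), ("f", 3, 0), ("f", 2, 1), ("m", 2, 0)]
    states = fit(rows)

    # Probability that a first-class female passenger survived.
    print(p_cond(states, lambda s: s[0] == "f" and s[1] == 1,
                 lambda s: s[2] == 1))  # 1.0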