A Simple Generative Model of Logical Reasoning and Statistical Learning
- URL: http://arxiv.org/abs/2305.11098v1
- Date: Thu, 18 May 2023 16:34:51 GMT
- Title: A Simple Generative Model of Logical Reasoning and Statistical Learning
- Authors: Hiroyuki Kido
- Abstract summary: Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
- Score: 0.6853165736531939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical learning and logical reasoning are two major fields of AI
expected to be unified for human-like machine intelligence. Most existing work
considers how to combine existing logical and statistical systems. However,
there is no theory of inference so far explaining how basic approaches to
statistical learning and logical reasoning stem from a common principle.
Inspired by the fact that much empirical work in neuroscience suggests Bayesian
(or probabilistic generative) approaches to brain function including learning
and reasoning, we here propose a simple Bayesian model of logical reasoning and
statistical learning. The theory is statistically correct as it satisfies
Kolmogorov's axioms, is consistent with both Fenstad's representation theorem
and maximum likelihood estimation and performs exact Bayesian inference with a
linear-time complexity. The theory is logically correct as it is a data-driven
generalisation of uncertain reasoning from consistency, possibility,
inconsistency and impossibility. The theory is correct in terms of machine
learning, as its solution to generation and prediction tasks on the MNIST
dataset is not only empirically reasonable but also theoretically justified
relative to the K-nearest-neighbour method. We simply model how data causes
symbolic knowledge in terms of its satisfiability in formal logic. Symbolic
reasoning emerges from the process of tracing this causality forwards and
backwards. The forward and backward processes correspond to an interpretation
and an inverse interpretation in formal logic, respectively. The
inverse interpretation differentiates our work from the mainstream often
referred to as inverse entailment, inverse deduction or inverse resolution. The
perspective gives new insights into learning and reasoning towards human-like
machine intelligence.
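To make the abstract's forward/backward picture concrete, here is a minimal Python sketch, assuming a toy propositional setting with invented atoms (`rain`, `wet`) and a uniform prior over data; it illustrates the general idea only, not the paper's actual model.

```python
# Hypothetical sketch of the forward/backward idea in the abstract (not the
# paper's formulation): each datum is abstracted to a model (a truth
# assignment over atoms); a formula's probability is the chance that a
# uniformly sampled datum induces a model satisfying it (forward process:
# interpretation); Bayes' rule inverts this into a posterior over data given
# that a formula holds (backward process: inverse interpretation).

# Toy dataset: each datum already abstracted to a model over two atoms.
data = [
    {"rain": True,  "wet": True},
    {"rain": False, "wet": True},
    {"rain": False, "wet": False},
    {"rain": False, "wet": False},
]

def prob(formula):
    """Forward process: P(formula) = fraction of data whose model satisfies it."""
    return sum(formula(m) for m in data) / len(data)

def posterior(formula):
    """Backward process: P(datum index | formula) under a uniform prior on data."""
    sat = [i for i, m in enumerate(data) if formula(m)]
    return {i: 1 / len(sat) for i in sat}

rain_implies_wet = lambda m: (not m["rain"]) or m["wet"]
print(prob(rain_implies_wet))          # 1.0: satisfied by every datum
print(posterior(lambda m: m["wet"]))   # {0: 0.5, 1: 0.5}
```

Here `prob` plays the role of an interpretation (data determine a formula's probability) and `posterior` its inverse; the paper's exact Bayesian inference with linear-time complexity is of course more general than this brute-force enumeration.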
Related papers
- Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0]
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2024-02-14T09:43:35Z)
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple KG and CoT datasets reveal the effect of training on random walk paths.
arXiv Detail & Related papers (2024-02-05T18:25:51Z)
- Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility [0.6853165736531939]
We propose a temporal probabilistic model that generates symbolic knowledge from data.
The correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation.
arXiv Detail & Related papers (2023-01-20T10:55:49Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Towards Unifying Perceptual Reasoning and Logical Reasoning [0.6853165736531939]
A recent study of logic presents a view of logical reasoning as Bayesian inference.
We show that the model unifies the two essential processes common in perceptual and logical systems.
arXiv Detail & Related papers (2022-06-27T10:32:47Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z)
- Towards Unifying Logical Entailment and Statistical Estimation [0.6853165736531939]
This paper gives a generative model of the interpretation of formal logic for data-driven logical reasoning.
It is shown that the generative model is a unified theory of several different types of reasoning in logic and statistics.
arXiv Detail & Related papers (2022-02-27T17:51:35Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy [2.7412662946127755]
We introduce a generative model of logical consequence relations.
It formalises the process of how the truth value of a sentence is probabilistically generated from the probability distribution over states of the world.
We show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
arXiv Detail & Related papers (2020-12-15T18:22:27Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.