Towards Unifying Logical Entailment and Statistical Estimation
- URL: http://arxiv.org/abs/2202.13406v1
- Date: Sun, 27 Feb 2022 17:51:35 GMT
- Title: Towards Unifying Logical Entailment and Statistical Estimation
- Authors: Hiroyuki Kido
- Abstract summary: This paper gives a generative model of the interpretation of formal logic for data-driven logical reasoning.
It is shown that the generative model is a unified theory of several different types of reasoning in logic and statistics.
- Score: 0.6853165736531939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper gives a generative model of the interpretation of formal logic for
data-driven logical reasoning. The key idea is to represent the interpretation
as the likelihood of a formula being true given a model of formal logic. Using the
likelihood, Bayes' theorem gives the posterior of the model being the case
given the formula. The posterior represents an inverse interpretation of formal
logic that seeks models making the formula true. The likelihood and posterior
cause Bayesian learning that gives the probability of the conclusion being true
in the models where all the premises are true. This paper looks at statistical
and logical properties of the Bayesian learning. It is shown that the
generative model is a unified theory of several different types of reasoning in
logic and statistics.
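The abstract's idea can be illustrated with a minimal sketch, assuming propositional logic, models as Boolean truth assignments, and a uniform prior; all function names here are hypothetical and not taken from the paper. The likelihood of a formula given a model is 1 if the model satisfies it and 0 otherwise; Bayes' theorem then concentrates the posterior on models where all premises are true, and the probability of the conclusion is computed under that posterior.

```python
from itertools import product

def models(atoms):
    """Enumerate every model (truth assignment) over the given atoms."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def likelihood(formula, model):
    """P(formula is true | model): 1 if the model satisfies it, else 0."""
    return 1.0 if formula(model) else 0.0

def entailment_probability(premises, conclusion, atoms):
    """Bayesian learning over models: with a uniform prior, the posterior
    (via Bayes' theorem) is supported on models where all premises are
    true; return the probability of the conclusion under that posterior."""
    all_models = list(models(atoms))
    prior = 1.0 / len(all_models)
    weights = []
    for m in all_models:
        w = prior
        for p in premises:
            # Inverse interpretation: only models making every premise
            # true keep nonzero posterior weight.
            w *= likelihood(p, m)
        weights.append(w)
    evidence = sum(weights)
    if evidence == 0.0:
        raise ValueError("premises are jointly unsatisfiable")
    return sum(w * likelihood(conclusion, m)
               for w, m in zip(weights, all_models)) / evidence
```

For example, with premises `p` and `p -> q`, the conclusion `q` receives probability 1 (modus ponens as a limiting case), while the single premise `p or q` gives the conclusion `p` probability 2/3, since `p` holds in two of the three models satisfying the premise.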
Related papers
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize the concept dynamics of the next token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z)
- Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0]
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2024-02-14T09:43:35Z)
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Causal models in string diagrams [0.0]
The framework of causal models provides a principled approach to causal reasoning, applied today across many scientific domains.
We present this framework in the language of string diagrams, interpreted formally using category theory.
We argue and demonstrate that causal reasoning according to the causal model framework is most naturally and intuitively done as diagrammatic reasoning.
arXiv Detail & Related papers (2023-04-15T21:54:48Z)
- Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility [0.6853165736531939]
We propose a temporal probabilistic model that generates symbolic knowledge from data.
The correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation.
arXiv Detail & Related papers (2023-01-20T10:55:49Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy [2.7412662946127755]
We introduce a generative model of logical consequence relations.
It formalises the process of how the truth value of a sentence is probabilistically generated from the probability distribution over states of the world.
We show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
arXiv Detail & Related papers (2020-12-15T18:22:27Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.