Inference of Abstraction for a Unified Account of Reasoning and Learning
- URL: http://arxiv.org/abs/2402.09046v1
- Date: Wed, 14 Feb 2024 09:43:35 GMT
- Title: Inference of Abstraction for a Unified Account of Reasoning and Learning
- Authors: Hiroyuki Kido
- Abstract summary: We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We model how data cause symbolic knowledge in terms of the satisfiability of that knowledge in formal logic.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by Bayesian approaches to brain function in neuroscience, we give a
simple theory of probabilistic inference for a unified account of reasoning and
learning. We model how data cause symbolic knowledge in terms of the
satisfiability of that knowledge in formal logic. The underlying idea is that
reasoning is a
process of deriving symbolic knowledge from data via abstraction, i.e.,
selective ignorance. Theoretical correctness is discussed in terms of the
logical consequence relation (proof-based), and empirical correctness in terms
of experiments on the MNIST dataset (experiment-based).
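As a concrete illustration of the abstract's central move, the following minimal Python sketch (hypothetical, not the paper's implementation) treats each data point as a truth assignment and takes a formula's probability to be the fraction of data points satisfying it; the conditional probability then behaves like the logical consequence relation in the limiting case. All function and variable names are invented for this sketch.

```python
# Minimal sketch (not the paper's code): probabilistic inference over
# symbolic knowledge, where each data point is a truth assignment and a
# formula's probability is the fraction of data points satisfying it.

from typing import Callable, Dict, List

Model = Dict[str, bool]            # a data point viewed as a truth assignment
Formula = Callable[[Model], bool]  # satisfiability test for a formula

def prob(phi: Formula, data: List[Model]) -> float:
    """P(phi) = (# data points satisfying phi) / (# data points)."""
    return sum(phi(d) for d in data) / len(data)

def cond_prob(phi: Formula, psi: Formula, data: List[Model]) -> float:
    """P(phi | psi): restrict attention to the data points satisfying psi."""
    sat_psi = [d for d in data if psi(d)]
    if not sat_psi:
        return float("nan")  # psi is satisfied by no data point; undefined
    return sum(phi(d) for d in sat_psi) / len(sat_psi)

if __name__ == "__main__":
    # Toy data set over two atoms: "rain" and "wet".
    data = [
        {"rain": True,  "wet": True},
        {"rain": True,  "wet": True},
        {"rain": False, "wet": True},
        {"rain": False, "wet": False},
    ]
    rain: Formula = lambda d: d["rain"]
    wet: Formula = lambda d: d["wet"]
    print(prob(wet, data))             # 0.75
    print(cond_prob(wet, rain, data))  # 1.0 -- every rainy data point is wet,
                                       # mirroring entailment in the limit
```

The sketch only shows the flavor of abstraction as selective ignorance: everything about a data point is ignored except whether it satisfies the formula in question.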
Related papers
- Inference of Abstraction for a Unified Account of Symbolic Reasoning from Data [0.0]
We give a unified probabilistic account of various types of symbolic reasoning from data.
The theory gives new insights into reasoning towards human-like machine intelligence.
arXiv Detail & Related papers (2024-02-13T18:24:23Z)
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple KG and CoT datasets reveal the effect of training on random walk paths.
arXiv Detail & Related papers (2024-02-05T18:25:51Z)
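As a rough, hypothetical sketch of the random-walk view (not the authors' code), the snippet below samples random walk paths on a tiny knowledge graph and aggregates the entities they reach, which is the kind of path aggregation the summary describes.

```python
# Hypothetical sketch: sample random walk paths on a small knowledge graph
# and aggregate which conclusions (reached entities) they support.

import random
from collections import Counter

# Knowledge graph as adjacency lists: head -> [(relation, tail), ...]
KG = {
    "Socrates": [("is_a", "human")],
    "human":    [("subclass_of", "mortal"), ("subclass_of", "animal")],
    "animal":   [("subclass_of", "organism")],
}

def random_walk(start: str, max_len: int, rng: random.Random) -> list:
    """One random walk path starting at `start`, as (relation, entity) hops."""
    path, node = [], start
    for _ in range(max_len):
        edges = KG.get(node, [])
        if not edges:
            break
        rel, node = rng.choice(edges)
        path.append((rel, node))
    return path

def aggregate(start: str, n_paths: int = 1000, max_len: int = 3, seed: int = 0):
    """Count how often each entity is reached: a crude 'conclusion' score."""
    rng = random.Random(seed)
    reached = Counter()
    for _ in range(n_paths):
        for _, entity in random_walk(start, max_len, rng):
            reached[entity] += 1
    return reached.most_common()

if __name__ == "__main__":
    print(aggregate("Socrates"))  # e.g. 'mortal' and 'organism' are reachable
```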
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
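To make "clustering variables and their domains" concrete, here is a small hypothetical sketch (not the paper's construction): low-level pixel variables are grouped into clusters, and each cluster's joint value is mapped onto a coarser high-level domain.

```python
# Hypothetical sketch: a constructive abstraction that clusters low-level
# variables (pixels) into high-level variables and maps their joint
# domain onto a coarser one ("dark" / "bright").

from typing import Dict, List

def abstract_state(pixels: List[float], clusters: Dict[str, List[int]],
                   threshold: float = 0.5) -> Dict[str, str]:
    """Map a low-level state to a high-level state, cluster by cluster."""
    high = {}
    for name, idxs in clusters.items():
        mean = sum(pixels[i] for i in idxs) / len(idxs)
        high[name] = "bright" if mean >= threshold else "dark"
    return high

if __name__ == "__main__":
    clusters = {"top_half": [0, 1], "bottom_half": [2, 3]}
    print(abstract_state([0.9, 0.8, 0.1, 0.2], clusters))
    # {'top_half': 'bright', 'bottom_half': 'dark'}
```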
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We model how data cause symbolic knowledge in terms of the satisfiability of that knowledge in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility [0.6853165736531939]
We propose a temporal probabilistic model that generates symbolic knowledge from data.
The correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation.
arXiv Detail & Related papers (2023-01-20T10:55:49Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
The explanation form is based on multi-hop chains of reasoning and includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Towards Unifying Perceptual Reasoning and Logical Reasoning [0.6853165736531939]
A recent study of logic presents a view of logical reasoning as Bayesian inference.
We show that the model unifies the two essential processes common in perceptual and logical systems.
arXiv Detail & Related papers (2022-06-27T10:32:47Z)
- On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
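The idea that every neuron has a meaning as a component of a formula can be sketched briefly. The snippet below is a simplified, hypothetical rendering (not the paper's implementation) of a weighted real-valued conjunction in the Łukasiewicz style: truth values live in [0, 1], and weights control how much each operand can pull the conjunction down.

```python
# Simplified, hypothetical sketch of a weighted real-valued conjunction
# neuron in the Lukasiewicz style: truth values in [0, 1], with weights
# controlling how strongly each operand can lower the conjunction.

from typing import List

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def weighted_and(truths: List[float], weights: List[float],
                 beta: float = 1.0) -> float:
    """One form of weighted Lukasiewicz AND: clamp(beta - sum(w_i * (1 - x_i)))."""
    return clamp01(beta - sum(w * (1.0 - x) for x, w in zip(truths, weights)))

def weighted_or(truths: List[float], weights: List[float],
                beta: float = 1.0) -> float:
    """Dual OR via De Morgan: NOT(AND(NOT x_i))."""
    return 1.0 - weighted_and([1.0 - x for x in truths], weights, beta)

if __name__ == "__main__":
    # "rain AND sprinkler", with rain weighted more heavily than sprinkler.
    print(weighted_and([0.9, 0.4], [1.0, 0.5]))  # 0.6
    print(weighted_or([0.9, 0.4], [1.0, 0.5]))   # 1.0 (clamped)
```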
This list is automatically generated from the titles and abstracts of the papers in this site.