Towards Unifying Perceptual Reasoning and Logical Reasoning
- URL: http://arxiv.org/abs/2206.13174v1
- Date: Mon, 27 Jun 2022 10:32:47 GMT
- Title: Towards Unifying Perceptual Reasoning and Logical Reasoning
- Authors: Hiroyuki Kido
- Abstract summary: A recent study of logic presents a view of logical reasoning as Bayesian inference.
We show that the model unifies the two essential processes common in perceptual and logical systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An increasing number of scientific experiments support the view of perception
as Bayesian inference, which is rooted in Helmholtz's view of perception as
unconscious inference. A recent study of logic presents a view of logical
reasoning as Bayesian inference. In this paper, we give a simple probabilistic
model that is applicable to both perceptual reasoning and logical reasoning. We
show that the model unifies the two essential processes common in perceptual
and logical systems: on the one hand, the process by which perceptual and
logical knowledge is derived from other knowledge, and on the other hand, the
process by which such knowledge is derived from data. We fully characterise the
model in terms of logical consequence relations.
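The idea of treating logical reasoning as Bayesian inference can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the paper's actual formalism: each "possible world" is a truth assignment over propositional atoms, a prior distribution is placed over worlds, the probability of a formula is the prior mass of the worlds that satisfy it, and conditioning on known formulas recovers logical consequence as a probability-1 posterior. All names and the uniform prior are assumptions made for illustration.

```python
# Illustrative sketch (assumed formulation, not the paper's exact model):
# possible worlds = truth assignments; P(formula) = prior mass of the
# worlds satisfying it; conditioning on evidence gives Bayes' rule.

from itertools import product

def worlds(atoms):
    """Enumerate all truth assignments over the given atoms."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def prob(formula, prior):
    """P(formula): total prior mass of the satisfying worlds."""
    return sum(p for w, p in prior.items() if formula(dict(w)))

def posterior(formula, evidence, prior):
    """P(formula | evidence) via Bayes' rule over worlds."""
    joint = prob(lambda w: formula(w) and evidence(w), prior)
    return joint / prob(evidence, prior)

atoms = ["rain", "wet"]
# uniform prior over the four possible worlds (hashable keys)
prior = {tuple(sorted(w.items())): 0.25 for w in worlds(atoms)}

implies = lambda w: (not w["rain"]) or w["wet"]   # rain -> wet
rain = lambda w: w["rain"]
wet = lambda w: w["wet"]

# Conditioning on "rain -> wet" and "rain" makes "wet" certain:
# modus ponens recovered as a probability-1 posterior.
p = posterior(wet, lambda w: implies(w) and rain(w), prior)
```

On this reading, a formula is a logical consequence of the evidence exactly when its posterior probability is 1 under every prior that gives the evidence positive mass, which is one way the abstract's "characterisation in terms of logical consequence relations" can be pictured.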
Related papers
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z) - Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0]
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2024-02-14T09:43:35Z) - Crystal: Introspective Reasoners Reinforced with Self-Feedback [118.53428015478957]
We propose a novel method to develop an introspective commonsense reasoner, Crystal.
To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge.
Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process.
arXiv Detail & Related papers (2023-10-07T21:23:58Z) - A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - A Quantitative Symbolic Approach to Individual Human Reasoning [0.0]
We take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning.
We employ techniques from non-monotonic reasoning and computer science, namely, a solving paradigm called answer set programming (ASP).
Finally, we can fruitfully use plausibility reasoning in ASP to test the effects of an existing experiment and explain different majority responses.
arXiv Detail & Related papers (2022-05-10T16:43:47Z) - Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z) - A Qualitative Theory of Cognitive Attitudes and their Change [8.417971913040066]
We show that it allows us to express a variety of relevant concepts for qualitative decision theory.
We also present two extensions of the logic, one by the notion of choice and the other by dynamic operators for belief change and desire change.
arXiv Detail & Related papers (2021-02-16T10:28:49Z) - Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.