A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT
- URL: http://arxiv.org/abs/2308.02950v1
- Date: Sat, 5 Aug 2023 20:33:13 GMT
- Title: A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT
- Authors: Louis Vervoort, Vitaliy Mizyakov, Anastasia Ugleva
- Abstract summary: We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning.
We propose simple tests for both types of reasoning, and apply them to ChatGPT.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We argue that a key reasoning skill that any advanced AI, say GPT-4, should
master in order to qualify as a 'thinking machine', or AGI, is
hypothetic-deductive reasoning. Problem-solving or question-answering can quite
generally be construed as involving two steps: hypothesizing that a certain set
of hypotheses T applies to the problem or question at hand, and deducing the
solution or answer from T - hence the term hypothetic-deductive reasoning. An
elementary proxy of hypothetic-deductive reasoning is causal reasoning. We
propose simple tests for both types of reasoning, and apply them to ChatGPT.
Our study shows that, at present, the chatbot has a limited capacity for either
type of reasoning, as soon as the problems considered are somewhat complex.
However, we submit that if an AI were capable of this type of reasoning in
a sufficiently wide range of contexts, it would be an AGI.
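The abstract's two-step framing (hypothesize that a set of hypotheses T applies to the problem, then deduce the answer from T) can be illustrated with a minimal sketch. The theories, rules, and query below are invented for illustration only; they are not the paper's actual test items.

```python
# Minimal sketch of the two-step hypothetic-deductive pattern described in
# the abstract: (1) hypothesize that a set of hypotheses T applies,
# (2) deduce the answer from T. Theories and rules are illustrative.

# Candidate hypothesis sets: each maps premise tuples to a conclusion.
THEORIES = {
    "electric_circuit": {"rules": {("switch_closed",): "bulb_lights"}},
    "plumbing": {"rules": {("valve_open",): "water_flows"}},
}

def hypothesize(observations):
    """Step 1: pick a theory T whose rule premises match the observations."""
    for name, theory in THEORIES.items():
        if any(set(premises) <= set(observations)
               for premises in theory["rules"]):
            return name, theory
    return None, None

def deduce(theory, observations):
    """Step 2: deduce every conclusion that follows from T and the observations."""
    return [conclusion for premises, conclusion in theory["rules"].items()
            if set(premises) <= set(observations)]

name, theory = hypothesize({"switch_closed"})
print(name, deduce(theory, {"switch_closed"}))
# -> electric_circuit ['bulb_lights']
```

Causal reasoning, the elementary proxy the authors mention, fits the same shape: the "theory" is a set of cause-effect rules and the deduction step reads off the predicted effects.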
Related papers
- Can ChatGPT Make Explanatory Inferences? Benchmarks for Abductive Reasoning [0.0]
This paper proposes a set of benchmarks for assessing the ability of AI programs to perform explanatory inference.
Tests on the benchmarks reveal that ChatGPT performs creative and evaluative inferences in many domains.
Claims that ChatGPT and similar models are incapable of explanation, understanding, causal reasoning, meaning, and creativity are rebutted.
arXiv Detail & Related papers (2024-04-29T15:19:05Z)
- Mitigating Misleading Chain-of-Thought Reasoning with Selective Filtering [59.495717939664246]
Large language models have manifested remarkable capabilities by leveraging chain-of-thought (CoT) reasoning techniques to solve intricate questions.
We propose a novel approach called the selective filtering reasoner (SelF-Reasoner) that assesses the entailment relationship between the question and the candidate reasoning chain.
SelF-Reasoner improves the fine-tuned T5 baseline consistently over the ScienceQA, ECQA, and LastLetter tasks.
arXiv Detail & Related papers (2024-03-28T06:28:35Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Implicit Chain of Thought Reasoning via Knowledge Distillation [58.80851216530288]
Instead of explicitly producing the chain of thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning.
We find that this approach enables solving tasks previously not solvable without explicit chain-of-thought, at a speed comparable to no chain-of-thought.
arXiv Detail & Related papers (2023-11-02T17:59:49Z)
- Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models [107.07851578154242]
Language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities.
It is unclear whether LMs solve these tasks by recalling answers memorized from the pretraining corpus or via a genuine multi-step reasoning mechanism.
We show that MechanisticProbe is able to detect the information of the reasoning tree from the model's attentions for most examples.
arXiv Detail & Related papers (2023-10-23T01:47:29Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Impossibility Results in AI: A Survey [3.198144010381572]
An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim.
We have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability.
We conclude that deductive impossibilities deny 100%-guarantees for security.
arXiv Detail & Related papers (2021-09-01T16:52:13Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- A Defeasible Calculus for Zetetic Agents [0.0]
We show that zetetic norms can be modeled via defeasible inferences to and from questions.
We offer a sequent calculus that accommodates the unique features of "erotetic defeat".
arXiv Detail & Related papers (2020-10-11T17:39:03Z)
- Explaining AI as an Exploratory Process: The Peircean Abduction Model [0.2676349883103404]
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what the XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.