A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT
- URL: http://arxiv.org/abs/2308.02950v1
- Date: Sat, 5 Aug 2023 20:33:13 GMT
- Title: A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT
- Authors: Louis Vervoort, Vitaliy Mizyakov, Anastasia Ugleva
- Abstract summary: We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning.
We propose simple tests for both types of reasoning, and apply them to ChatGPT.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We argue that a key reasoning skill that any advanced AI, say GPT-4, should
master in order to qualify as a 'thinking machine', or AGI, is
hypothetic-deductive reasoning. Problem-solving or question-answering can quite
generally be construed as involving two steps: hypothesizing that a certain set
of hypotheses T applies to the problem or question at hand, and deducing the
solution or answer from T - hence the term hypothetic-deductive reasoning. An
elementary proxy of hypothetic-deductive reasoning is causal reasoning. We
propose simple tests for both types of reasoning, and apply them to ChatGPT.
Our study shows that, at present, the chatbot has a limited capacity for either
type of reasoning once the problems considered become somewhat complex.
However, we submit that if an AI were capable of this type of reasoning in a
sufficiently wide range of contexts, it would be an AGI.
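The abstract's two-step picture (hypothesize a set of hypotheses T, then deduce the answer from T) suggests a straightforward way to operationalize such tests. The Python sketch below is our illustration of that idea, not the authors' actual protocol: `ask_model` is a placeholder for any chat-model API, and the sample causal-reasoning item is invented for illustration.

```python
# Minimal sketch of a hypothetic-deductive test: pose a premise set T,
# ask the model to deduce an answer, and score it against the expected
# deduction. `ask_model` is a placeholder, not any specific API.

from dataclasses import dataclass

@dataclass
class DeductionTest:
    premises: list[str]  # the hypothesis set T
    question: str
    expected: str        # the answer that follows deductively from T

def ask_model(prompt: str) -> str:
    """Placeholder: route the prompt to ChatGPT or any other chat model."""
    raise NotImplementedError

def run_test(test: DeductionTest) -> bool:
    prompt = (
        "Assume the following hypotheses hold:\n"
        + "\n".join(f"- {p}" for p in test.premises)
        + f"\n\nQuestion: {test.question}\n"
        + "Answer with a single word or short phrase deduced from the hypotheses."
    )
    answer = ask_model(prompt)
    return test.expected.lower() in answer.lower()

# Invented causal-reasoning item, in the spirit of the 'elementary proxy'
# mentioned in the abstract (not taken from the paper):
flu_item = DeductionTest(
    premises=["Every patient with flu has a fever.",
              "Patient A does not have a fever."],
    question="Does patient A have flu?",
    expected="no",
)
```

Scoring by substring match is deliberately crude; the point of such tests is the reasoning, not the grading, so a stricter checker could be swapped in.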
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility; in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Can ChatGPT Make Explanatory Inferences? Benchmarks for Abductive Reasoning [0.0]
This paper proposes a set of benchmarks for assessing the ability of AI programs to perform explanatory inference.
Tests on the benchmarks reveal that ChatGPT performs creative and evaluative inferences in many domains.
Claims that ChatGPT and similar models are incapable of explanation, understanding, causal reasoning, meaning, and creativity are rebutted.
arXiv Detail & Related papers (2024-04-29T15:19:05Z)
- Mitigating Misleading Chain-of-Thought Reasoning with Selective Filtering [59.495717939664246]
Large language models have manifested remarkable capabilities by leveraging chain-of-thought (CoT) reasoning techniques to solve intricate questions.
We propose a novel approach called the selective filtering reasoner (SelF-Reasoner) that assesses the entailment relationship between the question and the candidate reasoning chain (a minimal sketch of this filtering idea appears after this list).
SelF-Reasoner improves the fine-tuned T5 baseline consistently over the ScienceQA, ECQA, and LastLetter tasks.
arXiv Detail & Related papers (2024-03-28T06:28:35Z)
- When Is Inductive Inference Possible? [3.4991031406102238]
We provide a tight characterization of inductive inference by establishing a novel link to online learning theory.
We prove that inductive inference is possible if and only if the hypothesis class is a countable union of online learnable classes (a formal restatement appears after this list).
Our main technical tool is a novel non-uniform online learning framework, which may be of independent interest.
arXiv Detail & Related papers (2023-11-30T20:02:25Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Implicit Chain of Thought Reasoning via Knowledge Distillation [58.80851216530288]
Instead of explicitly producing the chain of thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning.
We find that this approach enables solving tasks previously not solvable without explicit chain-of-thought, at a speed comparable to no chain-of-thought.
arXiv Detail & Related papers (2023-11-02T17:59:49Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement (a generic sketch of such a loop appears after this list).
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Impossibility Results in AI: A Survey [3.198144010381572]
An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim.
We have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability.
We conclude that deductive impossibilities rule out 100% guarantees for security.
arXiv Detail & Related papers (2021-09-01T16:52:13Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The objective in clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Explaining AI as an Exploratory Process: The Peircean Abduction Model [0.2676349883103404]
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z)
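The selective-filtering idea in the SelF-Reasoner entry above can be made concrete with a short sketch. This is our rendering under assumptions: `entailment_score` stands in for whatever entailment model that paper uses, and the threshold is arbitrary.

```python
# Sketch of selective filtering over reasoning chains: keep the
# chain-of-thought answer only when the chain checks out against the
# question, otherwise fall back to a direct answer.

def entailment_score(question: str, chain: str) -> float:
    """Placeholder: score in [0, 1] for how well the candidate reasoning
    chain relates to / is entailed by the question (e.g. any NLI model)."""
    raise NotImplementedError

def answer_selectively(question: str, chain: str, answer_with_chain: str,
                       answer_direct: str, threshold: float = 0.5) -> str:
    if entailment_score(question, chain) >= threshold:
        return answer_with_chain  # the chain looks trustworthy
    return answer_direct          # discard the suspect chain
```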
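The characterization in the 'When Is Inductive Inference Possible?' entry reads compactly in symbols. The notation below is our reconstruction from that one-sentence summary, not the paper's own statement.

```latex
% Reconstruction from the entry's summary; notation is ours.
% Inductive inference over a hypothesis class H is possible exactly when
% H decomposes into countably many online learnable pieces.
\[
  \text{inductive inference over } \mathcal{H} \text{ is possible}
  \iff
  \mathcal{H} = \bigcup_{n \in \mathbb{N}} \mathcal{H}_n,
  \quad \text{with each } \mathcal{H}_n \text{ online learnable.}
\]
```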
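The iterative hypothesis refinement mentioned in the 'Phenomenal Yet Puzzling' entry is, at its core, a propose-test-refine loop. The sketch below is a generic rendering, not that paper's implementation; both helper functions are placeholders.

```python
# Generic propose-test-refine loop for inductive reasoning with an LM.
# Both helpers are placeholders, not the paper's code.

from typing import Optional

def propose_hypothesis(examples: list, feedback: Optional[str]) -> str:
    """Placeholder: ask an LM for a candidate rule covering the examples,
    optionally conditioned on feedback from the previous failed attempt."""
    raise NotImplementedError

def test_hypothesis(hypothesis: str, examples: list) -> tuple[bool, str]:
    """Placeholder: check the rule against the examples and return
    (all_pass, description_of_failures)."""
    raise NotImplementedError

def refine(examples: list, max_rounds: int = 5) -> Optional[str]:
    feedback = None
    for _ in range(max_rounds):
        hypothesis = propose_hypothesis(examples, feedback)
        ok, feedback = test_hypothesis(hypothesis, examples)
        if ok:
            return hypothesis  # rule consistent with all examples
    return None  # no consistent rule found within the budget
```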
This list is automatically generated from the titles and abstracts of the papers in this site.