An argument for the impossibility of machine intelligence
- URL: http://arxiv.org/abs/2111.07765v1
- Date: Wed, 20 Oct 2021 08:54:48 GMT
- Title: An argument for the impossibility of machine intelligence
- Authors: Jobst Landgrebe, Barry Smith
- Abstract summary: We define what it is to be an agent (device) that could be the bearer of AI.
We show that the mainstream definitions of `intelligence' are too weak even to capture what is involved when we ascribe intelligence to an insect.
We identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Since the noun phrase `artificial intelligence' (AI) was coined, it has been
debated whether humans are able to create intelligence using technology. We
shed new light on this question from the point of view of thermodynamics and
mathematics. First, we define what it is to be an agent (device) that could be
the bearer of AI. Then we show that the mainstream definitions of
`intelligence' proposed by Hutter and others and still accepted by the AI
community are too weak even to capture what is involved when we ascribe
intelligence to an insect. We then summarise the highly useful definition of
basic (arthropod) intelligence proposed by Rodney Brooks, and we identify the
properties that an AI agent would need to possess in order to be the bearer of
intelligence by this definition. Finally, we show that, from the perspective of
the disciplines needed to create such an agent, namely mathematics and physics,
these properties are realisable by neither implicit nor explicit mathematical
design nor by setting up an environment in which an AI could evolve
spontaneously.
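For context, the best-known of the definitions "proposed by Hutter and others" is the Legg-Hutter universal intelligence measure, which scores an agent by its expected performance across all computable environments, with simpler environments weighted more heavily. The following restatement is a reading of the abstract's reference, not a formula from the paper itself:

```latex
% Legg-Hutter universal intelligence of an agent \pi (Legg & Hutter, 2007):
%   E          = the set of computable, reward-bounded environments,
%   K(\mu)     = the Kolmogorov complexity of environment \mu,
%   V_\mu^\pi  = the expected cumulative reward of agent \pi in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

The paper's claim, per its abstract, is that definitions of this behavioural, reward-maximising kind are too weak even to capture what we mean when we ascribe intelligence to an insect.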
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same problem posed in different ways.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- AI-as-exploration: Navigating intelligence space [0.05657375260432172]
I articulate the contours of a rather neglected but central scientific role that AI has to play.
The basic thrust of AI-as-exploration is that of creating and studying systems that can reveal candidate building blocks of intelligence.
arXiv Detail & Related papers (2024-01-15T21:06:20Z)
- On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Do Artificial Intelligence Systems Understand? [0.0]
It is not necessary to attribute understanding to a machine in order to explain its exhibited "intelligent" behavior.
A merely syntactic and mechanistic approach to intelligence as a task-solving tool suffices to justify the range of operations that it can display.
arXiv Detail & Related papers (2022-07-22T13:57:02Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from abilities humans possess.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Computing Machinery and Knowledge [0.0]
The paper argues that it is possible for an AI agent to know, and examines this claim both from the current state of the art in artificial intelligence and from the perspective of what future AI development might bring in terms of superintelligent AI agents.
arXiv Detail & Related papers (2020-10-31T09:27:53Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
- Is Intelligence Artificial? [0.0]
This paper attempts to give a unifying definition of intelligence that can be applied to the natural world in general and then to Artificial Intelligence.
A metric grounded in Kolmogorov complexity theory is suggested, which leads to a measurement related to entropy (a compression-based sketch follows this list).
A version of an accepted AI test is then put forward as the 'acid test': it might be what a free-thinking program would try to achieve.
arXiv Detail & Related papers (2014-03-05T11:09:55Z)
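The entropy-flavoured metric in the last entry is only described, not specified. Kolmogorov complexity K(x) is uncomputable, but the length of any lossless compression of x is a computable upper bound on it, which is the usual practical proxy. The sketch below is a minimal, hypothetical Python illustration of that proxy; it is not code from any of the papers above.

```python
import zlib


def compressed_length(data: bytes) -> int:
    """Computable upper bound on Kolmogorov complexity K(data).

    K is uncomputable in general; the size of any lossless
    compression of the input bounds it from above.
    """
    return len(zlib.compress(data, level=9))


def complexity_ratio(data: bytes) -> float:
    """Compressed size relative to raw size.

    Near 1.0 for incompressible (high-entropy) inputs,
    near 0.0 for highly regular (low-complexity) inputs.
    """
    return compressed_length(data) / max(len(data), 1)


if __name__ == "__main__":
    import os

    regular = b"ab" * 500        # highly regular: compresses well
    noise = os.urandom(1000)     # high-entropy: barely compresses
    print(f"regular: {complexity_ratio(regular):.3f}")
    print(f"noise:   {complexity_ratio(noise):.3f}")
```

Running this prints a ratio close to 0 for the repetitive string and close to 1 for random bytes, which is the entropy-like behaviour a complexity-based intelligence metric would build on.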
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.