On a Functional Definition of Intelligence
- URL: http://arxiv.org/abs/2312.09546v1
- Date: Fri, 15 Dec 2023 05:46:49 GMT
- Title: On a Functional Definition of Intelligence
- Authors: Warisa Sritriratanarak and Paulo Garcia
- Abstract summary: Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Without an agreed-upon definition of intelligence, asking "is this system
intelligent?"" is an untestable question. This lack of consensus hinders
research, and public perception, on Artificial Intelligence (AI), particularly
since the rise of generative- and large-language models. Most work on precisely
capturing what we mean by "intelligence" has come from the fields of
philosophy, psychology, and cognitive science. Because these perspectives are
intrinsically linked to intelligence as it is demonstrated by natural
creatures, we argue such fields cannot, and will not, provide a sufficiently
rigorous definition that can be applied to artificial means. Thus, we present
an argument for a purely functional, black-box definition of intelligence,
distinct from how that intelligence is actually achieved; focusing on the
"what", rather than the "how". To achieve this, we first distinguish other
related concepts (sentience, sensation, agency, etc.) from the notion of
intelligence, particularly identifying how these concepts pertain to artificial
intelligent systems. As a result, we achieve a formal definition of
intelligence that is conceptually testable from only external observation, that
suggests intelligence is a continuous variable. We conclude by identifying
challenges that still remain towards quantifiable measurement. This work
provides a useful perspective for both the development of AI, and for public
perception of the capabilities and risks of AI.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - A Theory of Intelligences [0.0]
I develop a framework that applies across all systems, from physics to biology, humans, and AI.
I present general equations for intelligence and its components, and a simple expression for the evolution of intelligence traits.
arXiv Detail & Related papers (2023-08-23T20:18:43Z) - Suffering Toasters -- A New Self-Awareness Test for AI [0.0]
We argue that all current intelligence tests are insufficient to point to the existence or lack of intelligence.
We propose a new approach to test for artificial self-awareness and outline a possible implementation.
arXiv Detail & Related papers (2023-06-29T18:58:01Z) - Defining and Explorting the Intelligence Space [0.0]
This article lays out a cascade of definitions that induces both a nested hierarchy of three levels of intelligence and a wider-ranging space that is built around them and approximations to them.
Within this intelligence space, regions are identified that correspond to both natural -- most particularly, human -- intelligence and artificial intelligence (AI).
These definitions are then exploited in early explorations of four more advanced, and likely more controversial, topics: the singularity, generative AI, ethics, and intellectual property.
arXiv Detail & Related papers (2023-06-10T18:05:16Z) - An argument for the impossibility of machine intelligence [0.0]
We define what it is to be an agent (device) that could be the bearer of AI.
We show that the mainstream definitions of 'intelligence' are too weak even to capture what is involved when we ascribe intelligence to an insect.
We identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition.
arXiv Detail & Related papers (2021-10-20T08:54:48Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of
AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z) - Is Intelligence Artificial? [0.0]
This paper attempts to give a unifying definition that can be applied to the natural world in general and then to Artificial Intelligence.
A metric grounded in Kolmogorov complexity theory is suggested, which leads to an entropy-related measure (see the illustrative sketch after this list).
A version of an accepted AI test is then put forward as the 'acid test' and might be what a free-thinking program would try to achieve.
arXiv Detail & Related papers (2014-03-05T11:09:55Z)