Natural, Artificial, and Human Intelligences
- URL: http://arxiv.org/abs/2506.02183v1
- Date: Mon, 02 Jun 2025 19:11:49 GMT
- Title: Natural, Artificial, and Human Intelligences
- Authors: Emmanuel M. Pothos, Dominic Widdows,
- Abstract summary: For the most unique accomplishments of human intelligence, we think that, together with language, there are four essential ingredients, which can be summarised as invention, capacity for complex inference, embodiment, and self-awareness.
- Score: 0.046040036610482664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human achievement, whether in culture, science, or technology, is unparalleled in the known existence. This achievement is tied to the enormous communities of knowledge, made possible by (especially written) language: leaving theological content aside, it is very much true that "in the beginning was the word". There lies the challenge regarding modern age chatbots: they can 'do' language apparently as well as ourselves and there is a natural question of whether they can be considered intelligent, in the same way as we are or otherwise. Are humans uniquely intelligent? We consider this question in terms of the psychological literature on intelligence, evidence for intelligence in non-human animals, the role of written language in science and technology, progress with artificial intelligence, the history of intelligence testing (for both humans and machines), and the role of embodiment in intelligence. For the most unique accomplishments of human intelligence (such as music symphonies or complex scientific theories), we think that, together with language, there are four essential ingredients, which can be summarised as invention, capacity for complex inference, embodiment, and self-awareness. This conclusion makes untenable the position that human intelligence differs qualitatively from that of many non-human animals, since, with the exception of complex language, all the other requirements are fulfilled. Regarding chatbots, the current limitations are localised to the lack of embodiment and (apparent) lack of awareness.
Related papers
- Evaluating Intelligence via Trial and Error [59.80426744891971]
We introduce Survival Game as a framework to evaluate intelligence based on the number of failed attempts in a trial-and-error process. When the expectation and variance of failure counts are both finite, it signals the ability to consistently find solutions to new challenges. Our results show that while AI systems achieve the Autonomous Level in simple tasks, they are still far from it in more complex tasks.
arXiv Detail & Related papers (2025-02-26T05:59:45Z) - On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent yet described in different ways.
CRP asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z) - AI-as-exploration: Navigating intelligence space [0.05657375260432172]
I articulate the contours of a rather neglected but central scientific role that AI has to play.
The basic thrust of AI-as-exploration is that of creating and studying systems that can reveal candidate building blocks of intelligence.
arXiv Detail & Related papers (2024-01-15T21:06:20Z) - On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - The Nature of Intelligence [0.0]
The essence of intelligence, commonly represented by both humans and AI, is unknown.
We show that the nature of intelligence is a series of mathematically functional processes that minimize system entropy.
This essay should be a starting point for a deeper understanding of the universe and us as human beings.
arXiv Detail & Related papers (2023-07-20T23:11:59Z) - Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - An argument for the impossibility of machine intelligence [0.0]
We define what it is to be an agent (device) that could be the bearer of AI.
We show that the mainstream definitions of 'intelligence' are too weak even to capture what is involved when we ascribe intelligence to an insect.
We identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition.
arXiv Detail & Related papers (2021-10-20T08:54:48Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Understanding Human Intelligence through Human Limitations [9.594432031144715]
I argue that we can understand human intelligence, and the ways in which it may differ from artificial intelligence, by considering the computational problems that human minds have to solve.
I claim that these problems acquire their structure from three fundamental limitations that apply to human beings.
arXiv Detail & Related papers (2020-09-29T14:37:12Z) - Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.