Do Artificial Intelligence Systems Understand?
- URL: http://arxiv.org/abs/2207.11089v1
- Date: Fri, 22 Jul 2022 13:57:02 GMT
- Title: Do Artificial Intelligence Systems Understand?
- Authors: Eduardo C. Garrido-Merchán, Carlos Blanco
- Abstract summary: It is not necessary to attribute understanding to a machine in order to explain its exhibited "intelligent" behavior.
A merely syntactic and mechanistic approach to intelligence as a task-solving tool suffices to justify the range of operations that it can display.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Are intelligent machines really intelligent? Is the underlying philosophical
concept of intelligence satisfactory for describing how the present systems
work? Is understanding a necessary and sufficient condition for intelligence?
If a machine could understand, should we attribute subjectivity to it? This
paper addresses the problem of deciding whether the so-called "intelligent
machines" are capable of understanding, instead of merely processing signs. It
deals with the relationship between syntax and semantics. The main thesis
concerns the inevitability of semantics for any discussion about the
possibility of building conscious machines, condensed into the following two
tenets: "If a machine is capable of understanding (in the strong sense), then
it must be capable of combining rules and intuitions"; "If semantics cannot be
reduced to syntax, then a machine cannot understand." Our conclusion states
that it is not necessary to attribute understanding to a machine in order to
explain its exhibited "intelligent" behavior; a merely syntactic and
mechanistic approach to intelligence as a task-solving tool suffices to justify
the range of operations that it can display in the current state of
technological development.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility: in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z)
- A Review on Objective-Driven Artificial Intelligence [0.0]
Humans have an innate ability to understand context, nuances, and subtle cues in communication.
Humans possess a vast repository of common-sense knowledge that helps us make logical inferences and predictions about the world.
Machines lack this innate understanding and often struggle with making sense of situations that humans find trivial.
arXiv Detail & Related papers (2023-08-20T02:07:42Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
Yet the belief that we can build «talking machines» has never been stronger than that engendered by the latest generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- On the independence between phenomenal consciousness and computational intelligence [0.0]
We argue in this paper that phenomenal consciousness and computational intelligence, at least, are independent.
Since phenomenal consciousness and computational intelligence are independent, this has critical implications for society.
arXiv Detail & Related papers (2022-08-03T16:17:11Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- An argument for the impossibility of machine intelligence [0.0]
We define what it is to be an agent (device) that could be the bearer of AI.
We show that the mainstream definitions of 'intelligence' are too weak even to capture what is involved when we ascribe intelligence to an insect.
We identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition.
arXiv Detail & Related papers (2021-10-20T08:54:48Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Is Intelligence Artificial? [0.0]
This paper attempts to give a unifying definition of intelligence that can be applied to the natural world in general and then to Artificial Intelligence.
A metric grounded in Kolmogorov complexity theory is suggested, which leads to an entropy-based measurement.
A version of an accepted AI test is then put forward as the 'acid test' and might be what a free-thinking program would try to achieve.
arXiv Detail & Related papers (2014-03-05T11:09:55Z)
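The entry above mentions a metric grounded in Kolmogorov complexity that leads to an entropy measurement. Since Kolmogorov complexity is uncomputable, a common practical proxy is the compressed length of a string, which upper-bounds its complexity. The sketch below is illustrative only and not taken from that paper; the function names and the choice of zlib are assumptions:

```python
import os
import zlib

def compression_complexity(data: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity: the length of
    the zlib-compressed representation of the input."""
    return len(zlib.compress(data, 9))

def normalized_complexity(data: bytes) -> float:
    """Compressed length relative to raw length. Values near 0.0
    indicate highly regular (low-entropy) data; values near or above
    1.0 indicate incompressible (high-entropy) data."""
    if not data:
        return 0.0
    return compression_complexity(data) / len(data)

# A repetitive string compresses far better than uniformly random
# bytes of the same length, so its normalized complexity is lower.
assert normalized_complexity(b"ab" * 500) < normalized_complexity(os.urandom(1000))
```

In this spirit, the entropy-like measurement distinguishes structured behavior from noise by how well it compresses, without ever computing true Kolmogorov complexity.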
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.