Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents
- URL: http://arxiv.org/abs/2307.07515v4
- Date: Thu, 22 Feb 2024 08:48:13 GMT
- Title: Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents
- Authors: Johannes Jaeger
- Abstract summary: I investigate the prospect of developing artificial general intelligence (AGI).
I compare living and algorithmic systems, with a special focus on the notion of "agency."
It is extremely unlikely that true AGI can be developed in the current algorithmic framework of AI research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What is the prospect of developing artificial general intelligence (AGI)? I
investigate this question by systematically comparing living and algorithmic
systems, with a special focus on the notion of "agency." There are three
fundamental differences to consider: (1) Living systems are autopoietic, that
is, self-manufacturing, and therefore able to set their own intrinsic goals,
while algorithms exist in a computational environment with target functions
that are both provided by an external agent. (2) Living systems are embodied in
the sense that there is no separation between their symbolic and physical
aspects, while algorithms run on computational architectures that maximally
isolate software from hardware. (3) Living systems experience a large world, in
which most problems are ill-defined (and not all definable), while algorithms
exist in a small world, in which all problems are well-defined. These three
differences imply that living and algorithmic systems have very different
capabilities and limitations. In particular, it is extremely unlikely that true
AGI (beyond mere mimicry) can be developed in the current algorithmic framework
of AI research. Consequently, discussions about the proper development and
deployment of algorithmic tools should be shaped around the dangers and
opportunities of current narrow AI, not the extremely unlikely prospect of the
emergence of true agency in artificial systems.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be applied to cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- General Purpose Artificial Intelligence Systems (GPAIS): Properties, Definition, Taxonomy, Societal Implications and Responsible Governance [16.030931070783637]
The term General-Purpose Artificial Intelligence Systems (GPAIS) has been introduced to refer to AI systems that are not confined to a single task.
To date, the possibility of an Artificial General Intelligence powerful enough to perform any intellectual task as a human would, or even surpass it, has remained an aspiration and a fiction, and is considered a risk for our society.
This work discusses existing definitions for GPAIS and proposes a new definition that allows for a gradual differentiation among types of GPAIS according to their properties and limitations.
arXiv Detail & Related papers (2023-07-26T16:35:48Z)
- An Initial Look at Self-Reprogramming Artificial Intelligence [0.0]
We develop and experimentally validate the first fully self-reprogramming AI system.
Applying AI-based computer code generation to AI itself, we implement an algorithm with the ability to continuously modify and rewrite its own neural network source code.
arXiv Detail & Related papers (2022-04-30T05:44:34Z)
- Thinking Fast and Slow in AI: the Role of Metacognition [35.114607887343105]
State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence.
We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies.
arXiv Detail & Related papers (2021-10-05T06:05:38Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Hybrid Intelligence [4.508830262248694]
We argue that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence.
This concept aims at using the complementary strengths of human intelligence and AI, so that they can perform better than each of the two could separately.
arXiv Detail & Related papers (2021-05-03T08:56:09Z)
- Exploring the Nuances of Designing (with/for) Artificial Intelligence [0.0]
We explore the construct of infrastructure as a means to simultaneously address algorithmic and societal issues when designing AI.
Neither algorithmic solutions nor purely humanistic ones will be enough to fully address undesirable outcomes in the current narrow state of AI.
arXiv Detail & Related papers (2020-10-22T20:34:35Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.