Dissociating Artificial Intelligence from Artificial Consciousness
- URL: http://arxiv.org/abs/2412.04571v1
- Date: Thu, 05 Dec 2024 19:28:35 GMT
- Title: Dissociating Artificial Intelligence from Artificial Consciousness
- Authors: Graham Findlay, William Marshall, Larissa Albantakis, Isaac David, William GP Mayner, Christof Koch, Giulio Tononi
- Abstract summary: Developments in machine learning and computing power suggest that artificial general intelligence is within reach.
This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, would it experience sights, sounds, and thoughts, as we do when we are conscious?
We employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious.
- Abstract: Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which -- a basic stored-program computer -- simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.
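The paper's central dissociation can be sketched with a toy example (an illustrative sketch of my own, not the authors' actual Boolean-unit construction): two systems that are functionally equivalent, producing identical outputs on every input, while realizing that function through entirely different internal structures.

```python
# Toy illustration of the functional/phenomenal dissociation (my sketch,
# not the paper's construction): two systems with identical input-output
# behavior but very different internal structure.

def xor_network(a: int, b: int) -> int:
    """'Native' system: a pair of Boolean units computing XOR directly."""
    return a ^ b

# 'Simulator' system: a stored lookup table, standing in for a basic
# stored-program computer that replays the same input-output mapping.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def table_simulator(a: int, b: int) -> int:
    return XOR_TABLE[(a, b)]

# Full functional equivalence: identical outputs on every input.
assert all(xor_network(a, b) == table_simulator(a, b)
           for a in (0, 1) for b in (0, 1))
```

IIT's analysis targets a system's internal cause-effect structure rather than its input-output mapping, which is why two systems like these can differ phenomenally despite being behaviorally indistinguishable.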
Related papers
- Analyzing Advanced AI Systems Against Definitions of Life and Consciousness [0.0]
We propose a number of metrics for examining whether an advanced AI system has gained consciousness.
We suggest that sufficiently advanced architectures exhibiting immune-like sabotage defenses, mirror self-recognition analogs, or meta-cognitive updates may cross key thresholds akin to life-like or consciousness-like traits.
arXiv Detail & Related papers (2025-02-07T15:27:34Z) - On a heuristic approach to the description of consciousness as a hypercomplex system state and the possibility of machine consciousness (German edition) [0.0]
This article shows that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis.
Based on theoretical considerations, it could be possible, through mathematical investigation of a so-called bicomplex algebra, to generate and use hypercomplex system states on machines.
arXiv Detail & Related papers (2024-09-03T17:55:57Z) - Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z) - AI Consciousness is Inevitable: A Theoretical Computer Science Perspective [0.0]
We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations.
We develop a formal machine model for consciousness inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness.
arXiv Detail & Related papers (2024-03-25T18:38:54Z) - On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z) - A Survey on Brain-Inspired Deep Learning via Predictive Coding [85.93245078403875]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas, can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - How (and Why) to Think that the Brain is Literally a Computer [0.0]
The relationship between brains and computers is often taken to be merely metaphorical.
arXiv Detail & Related papers (2022-08-24T15:38:10Z) - Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
Clarifying these particular principles could help us build AI systems that benefit from these human abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - The Mode of Computing [0.0]
Mental processes performed by natural brains are often thought of, informally, as computing processes, with the brain likened to computing machinery.
One such proposal is that natural computing appeared when interpretations were first made by biological entities.
By analogy with computing machinery, there must be a system level at the top of the neural circuitry and directly below the knowledge level, named here the Mode of Natural Computing.
arXiv Detail & Related papers (2019-03-25T19:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.