What we are is more than what we do
- URL: http://arxiv.org/abs/2102.04219v1
- Date: Thu, 21 Jan 2021 19:26:15 GMT
- Title: What we are is more than what we do
- Authors: Larissa Albantakis and Giulio Tononi
- Abstract summary: Complex behavior becomes meaningless if it is not performed by a conscious being.
The dissociation between "being" and "doing" is most salient in artificial general intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: If we take the subjective character of consciousness seriously, consciousness
becomes a matter of "being" rather than "doing". Because "doing" can be
dissociated from "being", functional criteria alone are insufficient to decide
whether a system possesses the necessary requirements for being a physical
substrate of consciousness. The dissociation between "being" and "doing" is
most salient in artificial general intelligence, which may soon replicate any
human capacity: computers can perform complex functions (in the limit
resembling human behavior) in the absence of consciousness. Complex behavior
becomes meaningless if it is not performed by a conscious being.
Related papers
- Why Is Anything Conscious? [0.0]
We provide a mathematical formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information.
We claim that access consciousness at the human level is impossible without the ability to hierarchically model i) the self, ii) the world/others, and iii) the self as modelled by others.
arXiv Detail & Related papers (2024-09-22T18:01:30Z)
- Consciousness defined: requirements for biological and artificial general intelligence [0.0]
Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself.
Requirements for consciousness include at least some capability for perception and a memory for the storage of such perceptual information.
We can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
arXiv Detail & Related papers (2024-06-03T14:20:56Z)
- Can a Machine be Conscious? Towards Universal Criteria for Machine Consciousness [0.0]
Many concerns have been voiced about the ramifications of creating an artificial conscious entity.
This is compounded by a marked lack of consensus around what constitutes consciousness.
We propose five criteria for determining whether a machine is conscious.
arXiv Detail & Related papers (2024-04-19T18:38:22Z)
- Is artificial consciousness achievable? Lessons from the human brain [0.0]
We analyse the question of developing artificial consciousness from an evolutionary perspective.
We take the evolution of the human brain and its relation with consciousness as a reference model.
We propose to clearly specify what is common and what differs between AI conscious processing and full human conscious experience.
arXiv Detail & Related papers (2024-04-18T12:59:44Z)
- COKE: A Cognitive Knowledge Graph for Machine Theory of Mind [87.14703659509502]
Theory of mind (ToM) refers to humans' ability to understand and infer the desires, beliefs, and intentions of others.
COKE is the first cognitive knowledge graph for machine theory of mind.
arXiv Detail & Related papers (2023-05-09T12:36:58Z)
- Sources of Richness and Ineffability for Phenomenally Conscious States [57.8137804587998]
We provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness.
In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state.
While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation.
arXiv Detail & Related papers (2023-02-13T14:41:04Z)
- On the independence between phenomenal consciousness and computational intelligence [0.0]
We argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent.
This independence has critical implications for society.
arXiv Detail & Related papers (2022-08-03T16:17:11Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present AGENT, a benchmark of procedurally generated 3D animations structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.