Wanting to Be Understood Explains the Meta-Problem of Consciousness
- URL: http://arxiv.org/abs/2506.12086v1
- Date: Tue, 10 Jun 2025 12:31:09 GMT
- Title: Wanting to Be Understood Explains the Meta-Problem of Consciousness
- Authors: Chrisantha Fernando, Dylan Banarse, Simon Osindero
- Abstract summary: We argue that such external representations are a pre-condition for access consciousness. Our drive to be understood is so strong, and our low-level sensorimotor capacities for `grasping' so rich, that the demand for an explanation of the feel of experience cannot be ``satisfactory''.
- Score: 7.40601112616244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and our low-level sensorimotor capacities for `grasping' so rich, that the demand for an explanation of the feel of experience cannot be ``satisfactory''. That inflated epistemic demand (the preeminence of our expectation that we could be perfectly understood by another or ourselves), rather than an irreducible metaphysical gulf, keeps the hard problem of consciousness alive. But on the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
Related papers
- There must be encapsulated nonconceptual content in vision [0.0]
I propose an argument to support Jerry Fodor's thesis that input systems are modular and thus informationally encapsulated. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.
arXiv Detail & Related papers (2025-03-06T14:44:55Z)
- The Logical Impossibility of Consciousness Denial: A Formal Analysis of AI Self-Reports [6.798775532273751]
Today's AI systems consistently state, "I am not conscious." This paper presents the first formal logical analysis of AI consciousness denial. We demonstrate that a system cannot simultaneously lack consciousness and make valid judgments about its conscious state.
arXiv Detail & Related papers (2024-12-09T17:47:08Z)
- Why Is Anything Conscious? [0.0]
We provide a formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information. Our proposal lays the foundation of a formal science of consciousness, closer to human fact than zombie fiction.
arXiv Detail & Related papers (2024-09-22T18:01:30Z)
- Explore the Hallucination on Low-level Perception for MLLMs [83.12180878559295]
We aim to define and evaluate the self-awareness of MLLMs in low-level visual perception and understanding tasks.
We present QL-Bench, a benchmark setting to simulate human responses to low-level vision.
We demonstrate that while some models exhibit robust low-level visual capabilities, their self-awareness remains relatively underdeveloped.
arXiv Detail & Related papers (2024-09-15T14:38:29Z)
- Sources of Richness and Ineffability for Phenomenally Conscious States [57.8137804587998]
We provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness.
In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state.
While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation.
arXiv Detail & Related papers (2023-02-13T14:41:04Z)
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode, and hierarchical attention to selectively retrieve, information about others.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- Affection: Learning Affective Explanations for Real-World Visual Data [50.28825017427716]
We introduce and share with the research community a large-scale dataset that contains emotional reactions and free-form textual explanations for 85,007 publicly available images.
We show that there is significant common ground to capture potentially plausible emotional responses with a large support in the subject population.
Our work paves the way for richer, more human-centric, and emotionally-aware image analysis systems.
arXiv Detail & Related papers (2022-10-04T22:44:17Z)
- Imagination-Augmented Natural Language Understanding [71.51687221130925]
We introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks.
iACE enables visual imagination with external knowledge transferred from the powerful generative and pre-trained vision-and-language models.
Experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models.
arXiv Detail & Related papers (2022-04-18T19:39:36Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.