Testing the Machine Consciousness Hypothesis
- URL: http://arxiv.org/abs/2512.01081v1
- Date: Sun, 30 Nov 2025 21:05:48 GMT
- Title: Testing the Machine Consciousness Hypothesis
- Authors: Stephen Fitz
- Abstract summary: The Machine Consciousness Hypothesis states that consciousness is a substrate-free functional property of computational systems. I propose a research program to investigate this idea in silico by studying how collective self-models emerge from distributed learning systems.
- Score: 0.6426115997581661
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Machine Consciousness Hypothesis states that consciousness is a substrate-free functional property of computational systems capable of second-order perception. I propose a research program to investigate this idea in silico by studying how collective self-models (coherent, self-referential representations) emerge from distributed learning systems embedded within universal self-organizing environments. The theory outlined here starts from the supposition that consciousness is an emergent property of collective intelligence systems undergoing synchronization of prediction through communication. It is not an epiphenomenon of individual modeling but a property of the language that a system evolves to internally describe itself. For a model of base reality, I begin with a minimal but general computational world: a cellular automaton, which exhibits both computational irreducibility and local reducibility. On top of this computational substrate, I introduce a network of local, predictive, representational (neural) models capable of communication and adaptation. I use this layered model to study how collective intelligence gives rise to self-representation as a direct consequence of inter-agent alignment. I suggest that consciousness does not emerge from modeling per se, but from communication. It arises from the noisy, lossy exchange of predictive messages between groups of local observers describing persistent patterns in the underlying computational substrate (base reality). It is through this representational dialogue that a shared model arises, aligning many partial views of the world. The broader goal is to develop empirically testable theories of machine consciousness, by studying how internal self-models may form in distributed systems without centralized control.
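The layered model in the abstract (a cellular automaton as base reality, with a network of local predictive models that communicate and align on top) can be sketched concretely. The following is a hypothetical minimal illustration, not the author's implementation: the substrate is assumed to be Rule 110, each observer is a logistic model predicting the next value of its window's center cell, and "communication" is modeled as a noisy averaging of weight vectors between ring neighbors. All of these concrete choices are assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
RULE110 = np.array([0, 1, 1, 1, 0, 1, 1, 0])  # next cell for neighborhood index 0..7

def ca_step(state, table=RULE110):
    """One synchronous update of a 1D cellular automaton (periodic boundary)."""
    idx = 4 * np.roll(state, 1) + 2 * state + np.roll(state, -1)
    return table[idx]

class Observer:
    """A local predictor: logistic model over a small window of cells."""
    def __init__(self, center, radius, lr=0.1):
        self.center, self.radius, self.lr = center, radius, lr
        self.w = rng.normal(0.0, 0.1, size=2 * radius + 2)  # +1 for a bias feature

    def _x(self, state):
        win = np.take(state, range(self.center - self.radius,
                                   self.center + self.radius + 1), mode="wrap")
        return np.append(win.astype(float), 1.0)

    def predict(self, state):
        return 1.0 / (1.0 + np.exp(-self.w @ self._x(state)))

    def update(self, state, next_state):
        """One SGD step on the logistic loss for the window's center cell."""
        x, y = self._x(state), next_state[self.center]
        self.w += self.lr * (y - self.predict(state)) * x

def communicate(observers, noise=0.01):
    """Noisy, lossy message exchange: each observer pulls its weights toward
    the average of its ring neighbors' (noise-corrupted) broadcasts. This
    averaging scheme is an illustrative stand-in for representational alignment."""
    msgs = [o.w + rng.normal(0.0, noise, o.w.shape) for o in observers]
    for i, o in enumerate(observers):
        nbr = 0.5 * (msgs[i - 1] + msgs[(i + 1) % len(observers)])
        o.w = 0.9 * o.w + 0.1 * nbr

# Run: CA substrate below, communicating predictive observers above.
N = 64
state = rng.integers(0, 2, size=N)
observers = [Observer(c, radius=1) for c in range(0, N, 4)]
for _ in range(200):
    nxt = ca_step(state)
    for o in observers:
        o.update(state, nxt)
    communicate(observers)
    state = nxt
```

One could then measure how similar the observers' weight vectors become over time as a crude proxy for the "shared model" the abstract describes; the choice of proxy, like everything above, is an assumption of this sketch.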
Related papers
- Exploration Through Introspection: A Self-Aware Reward Model [0.0]
Evidence points to a unified system for self- and other-awareness. We explore this self-awareness by having reinforcement learning agents infer their own internal states in gridworld environments.
arXiv Detail & Related papers (2026-01-06T19:53:33Z) - Automatic Minds: Cognitive Parallels Between Hypnotic States and Large Language Model Processing [0.0]
The cognitive processes of the hypnotized mind and the computational operations of large language models share deep functional parallels. Both systems generate sophisticated, contextually appropriate behavior through automatic pattern-completion mechanisms. The future of reliable AI lies in hybrid architectures that integrate generative fluency with mechanisms of executive monitoring.
arXiv Detail & Related papers (2025-11-03T09:08:50Z) - Concept-Guided Interpretability via Neural Chunking [64.6429903327095]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract recurring chunks on a neural population level. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z) - Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems [0.0]
We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts like free will.
arXiv Detail & Related papers (2025-05-05T21:24:50Z) - Meta-Representational Predictive Coding: Biomimetic Self-Supervised Learning [51.22185316175418]
We present a new form of predictive coding that we call meta-representational predictive coding (MPC). MPC sidesteps the need for learning a generative model of sensory input by learning to predict representations of sensory input across parallel streams.
arXiv Detail & Related papers (2025-03-22T22:13:14Z) - EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds [119.02266432167085]
We propose EgoAgent, a unified agent model that simultaneously learns to represent, predict, and act within a single transformer. EgoAgent explicitly models the causal and temporal dependencies among these abilities by formulating the task as an interleaved sequence of states and actions. Comprehensive evaluations of EgoAgent on representative tasks such as image classification, egocentric future state prediction, and 3D human motion prediction demonstrate the superiority of our method.
arXiv Detail & Related papers (2025-02-09T11:28:57Z) - From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z) - Grid-SD2E: A General Grid-Feedback in a System for Cognitive Learning [0.5221459608786241]
This study is inspired in part by grid cells in creating a more general and robust grid module.
We construct an interactive and self-reinforcing cognitive system together with Bayesian reasoning.
The smallest computing unit is extracted, which is analogous to a single neuron in the brain.
arXiv Detail & Related papers (2023-04-04T14:54:12Z) - Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z) - Brain-inspired self-organization with cellular neuromorphic computing
for multimodal unsupervised learning [0.0]
We propose a brain-inspired neural system based on the reentry theory using Self-Organizing Maps and Hebbian-like learning.
We demonstrate the gain from so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but learned through self-organization over the course of the system's experience.
arXiv Detail & Related papers (2020-04-11T21:02:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.