Grounded Computation & Consciousness: A Framework for Exploring Consciousness in Machines & Other Organisms
- URL: http://arxiv.org/abs/2409.16036v1
- Date: Tue, 24 Sep 2024 12:34:05 GMT
- Title: Grounded Computation & Consciousness: A Framework for Exploring Consciousness in Machines & Other Organisms
- Authors: Ryan Williams
- Abstract summary: This paper discusses the necessity for an ontological basis of consciousness, and introduces a formal framework for grounding computational descriptions into an ontological substrate.
A method is demonstrated for estimating the difference in qualitative experience between two systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational modeling is a critical tool for understanding consciousness, but is it enough on its own? This paper discusses the necessity for an ontological basis of consciousness, and introduces a formal framework for grounding computational descriptions into an ontological substrate. Utilizing this technique, a method is demonstrated for estimating the difference in qualitative experience between two systems. This framework has wide applicability to computational theories of consciousness.
Related papers
- Why the Brain Cannot Be a Digital Computer: History-Dependence and the Computational Limits of Consciousness
We show that the human brain as currently understood cannot function as a classical digital computer.
Our analysis calculates the bit-length requirements for representing consciously distinguishable sensory "stimulus frames".
arXiv Detail & Related papers (2025-03-13T16:27:42Z)
- Exploring Cognition through Morphological Info-Computational Framework
Information and computation are inseparably connected with cognition.
This chapter explores research connecting nature as a computational structure for a cognizer.
Understanding the embodiment of cognition through its morphological computational basis is crucial for biology, evolution, intelligence theory, AI, robotics, and other fields.
arXiv Detail & Related papers (2024-12-01T09:56:38Z)
- Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition
We propose an approach to explainability of artificial neural networks that involves using concepts from human cognitive tokens.
We show that the categorical segment created by a neuron is actually the result of a superposition of categorical sub-dimensions within its input vector space.
arXiv Detail & Related papers (2024-10-23T05:27:09Z)
- Preliminaries to artificial consciousness: a multidimensional heuristic approach
The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges.
This paper introduces a composite, multilevel, and multidimensional model of consciousness as a framework to guide research in this field.
arXiv Detail & Related papers (2024-03-29T13:47:47Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Formalising Concepts as Grounded Abstractions
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Understanding understanding: a renormalization group inspired model of (artificial) intelligence
This paper is about the meaning of understanding in scientific and in artificial intelligent systems.
We give a mathematical definition of understanding in which, contrary to common wisdom, the probability space is defined on the input set.
We show how scientific understanding fits into this framework and demonstrate the difference between a scientific task and pattern recognition.
arXiv Detail & Related papers (2020-10-26T11:11:46Z)
- Formalizing Falsification for Theories of Consciousness Across Computational Hierarchies
Integrated Information Theory (IIT) is widely regarded as the preeminent theory of consciousness.
Epistemological issues in the form of the "unfolding argument" have provided a refutation of IIT.
We show how IIT is simultaneously falsified at the finite-state automaton level and unfalsifiable at the combinatorial-state automaton level.
arXiv Detail & Related papers (2020-06-12T18:05:46Z)
- Neuro-symbolic Architectures for Context Understanding
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- The Mathematical Structure of Integrated Information Theory
Integrated Information Theory is one of the leading models of consciousness.
It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state.
arXiv Detail & Related papers (2020-02-18T15:44:02Z)
- A Mathematical Framework for Consciousness in Neural Networks
This paper presents a novel mathematical framework for bridging the explanatory gap between consciousness and its physical correlates.
We do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do.
We establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information.
arXiv Detail & Related papers (2017-04-04T18:32:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.