Informational non-reductionist theory of consciousness that providing
maximum accuracy of reality prediction
- URL: http://arxiv.org/abs/2401.00004v1
- Date: Sun, 10 Dec 2023 13:27:10 GMT
- Title: Informational non-reductionist theory of consciousness that providing
maximum accuracy of reality prediction
- Authors: E.E. Vityaev
- Abstract summary: The paper considers a non-reductionist theory of consciousness, which is not reducible to theories of reality and to physiological or psychological theories.
The development principle of the Information Theory of Consciousness (ITS) is put forward.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The paper considers a non-reductionist theory of consciousness, one that is not
reducible to theories of reality or to physiological or psychological
theories. Following D. I. Dubrovsky's "informational approach" to the "Mind-Brain
Problem", we consider reality through the prism of information about
observed phenomena, which, in turn, is perceived by subjective reality through
sensations, perceptions, feelings, etc., which are themselves information about
the corresponding brain processes. Within this framework the following
development principle of the Information Theory of Consciousness (ITS) is put
forward: the brain discovers all possible causal relations in the external
world and makes all possible inferences from them. The paper shows that an ITS built
on this principle: (1) is also based on the informational laws of the structure of the
external world; (2) explains the structure and functioning of the brain's
functional systems and cellular ensembles; (3) ensures maximum accuracy of
prediction and the anticipation of reality; (4) resolves emerging
contradictions; and (5) is an information theory of the brain's reflection of
reality.
Related papers
- Consciousness defined: requirements for biological and artificial general intelligence [0.0]
Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself.
Requirements for consciousness include at least some capability for perception and a memory for the storage of such perceptual information.
We can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
arXiv Detail & Related papers (2024-06-03T14:20:56Z)
- Neuromorphic Correlates of Artificial Consciousness [1.4957306171002251]
The concept of neural correlates of consciousness (NCC) suggests that specific neural activities are linked to conscious experiences.
This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations.
arXiv Detail & Related papers (2024-05-03T09:27:51Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Survey of Consciousness Theory from Computational Perspective [8.521492577054078]
This paper surveys several main branches of consciousness theories originating from different subjects.
It also discusses the existing evaluation metrics of consciousness and possibility for current computational models to be conscious.
arXiv Detail & Related papers (2023-09-18T18:23:58Z)
- Causal potency of consciousness in the physical world [0.0]
An attempt to construct a functional theory of the conscious mind within the framework of classical physics leads to causally impotent conscious experiences.
We show that a mind-brain theory consistent with causally potent conscious experiences is provided by modern quantum physics.
arXiv Detail & Related papers (2023-06-26T13:55:33Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Sources of Richness and Ineffability for Phenomenally Conscious States [57.8137804587998]
We provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness.
In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state.
While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation.
arXiv Detail & Related papers (2023-02-13T14:41:04Z)
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode, and hierarchical attention to selectively retrieve, information about others.
This results in ToMMY, a theory-of-mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- Quantum information theoretic approach to the mind-brain problem [0.0]
In classical physics, addressing the mind-brain problem is a formidable task.
No physical mechanism is able to explain how the brain generates the unobservable, inner psychological world of conscious experiences.
Modern quantum physics affirms the interplay between two types of physical entities in Hilbert space.
arXiv Detail & Related papers (2020-12-13T09:07:33Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences arising from its use.