A Machine Consciousness architecture based on Deep Learning and Gaussian
Processes
- URL: http://arxiv.org/abs/2002.00509v2
- Date: Sat, 14 Mar 2020 00:01:23 GMT
- Title: A Machine Consciousness architecture based on Deep Learning and Gaussian
Processes
- Authors: Eduardo C. Garrido Merchán, Martín Molina
- Abstract summary: We propose an architecture that may give rise to consciousness in a machine, based on the global workspace theory.
This architecture is built from processes that use recent developments in artificial intelligence models, whose outputs constitute these correlated activities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in machine learning have pushed the tasks that machines
can do beyond the boundaries of what was thought possible only a few years ago.
Methodologies such as deep learning or generative models have achieved complex
tasks such as automatically generating art pictures or literature. On the other
hand, symbolic resources have also been developed further and perform well on
problems such as those posed by common sense reasoning. Machine Consciousness
is a field that has been studied in depth, and several theories grounded in the
functionalist philosophical tradition, such as the global workspace theory or
information integration theory, have been proposed to explain how consciousness
could arise in machines. In this work, we propose an architecture that may give
rise to consciousness in a machine, based on the global workspace theory and on
the assumption that consciousness appears in machines that have cognitive
processes and exhibit conscious behaviour. The architecture is built from
processes that use recent artificial intelligence models, whose outputs
constitute these correlated activities. For each module of the architecture, we
provide a detailed explanation of the models involved and of how they
communicate with each other to form the cognitive architecture.
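Neither the abstract nor this summary page gives implementation details, so the sketch below is purely illustrative rather than the paper's method: a minimal global-workspace loop in which specialist modules propose candidate activities and a Gaussian Process (here scikit-learn's GaussianProcessRegressor, an assumption) arbitrates which one is broadcast to all modules. The module names, the salience signal, and the cold-start handling are hypothetical choices made only to keep the example self-contained.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumption: specialist modules post candidate "activities" to a global
# workspace; a Gaussian Process predicts the salience of each candidate,
# and the most salient one is broadcast back to every module.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


class Module:
    """A specialist process that proposes an activity (a feature vector)."""

    def __init__(self, name, dim=4, seed=0):
        self.name = name
        self.dim = dim
        self.rng = np.random.default_rng(seed)

    def propose(self):
        # Stand-in for the output of a deep model (e.g. a perception or
        # language module); here just a random feature vector.
        return self.rng.normal(size=self.dim)

    def receive(self, broadcast):
        # Stand-in for updating internal state with the broadcast content.
        pass


def run_workspace(modules, steps=10):
    # Hypothetical salience history used to fit the GP that arbitrates
    # access to the workspace.
    X_hist, y_hist = [], []
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    rng = np.random.default_rng(42)
    for t in range(steps):
        candidates = {m.name: m.propose() for m in modules}
        if X_hist:
            gp.fit(np.array(X_hist), np.array(y_hist))
            scores = {name: gp.predict(c.reshape(1, -1))[0]
                      for name, c in candidates.items()}
        else:
            scores = {name: rng.random() for name in candidates}  # cold start
        winner = max(scores, key=scores.get)
        for m in modules:
            m.receive(candidates[winner])  # global broadcast
        # Record an (assumed) external salience signal for the winning activity.
        X_hist.append(candidates[winner])
        y_hist.append(float(np.tanh(candidates[winner].sum())))
        print(f"step {t}: broadcast from {winner}")


if __name__ == "__main__":
    run_workspace([Module("vision", seed=1),
                   Module("language", seed=2),
                   Module("memory", seed=3)])
```

The GP is used here only as one plausible way to score competing module outputs with an uncertainty-aware model; the paper itself should be consulted for how its modules and their communication are actually defined.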
Related papers
- The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Non-equilibrium physics: from spin glasses to machine and neural learning [0.0]
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales.
We aim to characterize such emergent intelligence in disordered systems through statistical physics.
We uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems.
arXiv Detail & Related papers (2023-08-03T04:56:47Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- The Objective Function: Science and Society in the Age of Machine Intelligence [0.0]
Machine intelligence has been applied to domains as disparate as criminal justice, commerce, medicine, media and the arts, and mechanical engineering.
This dissertation examines the workplace practices of the applied machine learning researchers who produce machine intelligence.
The dissertation also examines how machine intelligence depends upon a range of accommodations from other institutions and organizations.
arXiv Detail & Related papers (2022-09-21T15:05:54Z)
- Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Machine learning and deep learning [0.0]
Machine learning describes the capacity of systems to learn from problem-specific training data.
Deep learning is a machine learning concept based on artificial neural networks.
arXiv Detail & Related papers (2021-04-12T09:54:12Z)