An Artificial Consciousness Model and its relations with Philosophy of
Mind
- URL: http://arxiv.org/abs/2011.14475v2
- Date: Tue, 1 Dec 2020 17:27:10 GMT
- Title: An Artificial Consciousness Model and its relations with Philosophy of
Mind
- Authors: Eduardo C. Garrido-Merchán and Martin Molina and Francisco M. Mendoza
- Abstract summary: This work seeks to study the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings.
We show in a large set of experiments how an autonomous agent can benefit from having a cognitive architecture such as the one described.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work seeks to study the beneficial properties that an
autonomous agent can obtain by implementing a cognitive architecture similar
to that of conscious beings. Throughout this document, a conscious model of an
autonomous agent based on a global workspace architecture is presented. We
describe how this agent is viewed from different perspectives in the
philosophy of mind, whose ideas inspired the model. The goal of this model is
to create autonomous agents able to navigate an environment composed of
multiple independent magnitudes, adapting to their surroundings in order to
find the best possible position according to their inner preferences. The
purpose of the model is to test the effectiveness of the cognitive mechanisms
it incorporates, such as an attention mechanism for magnitude selection,
possession of inner feelings and preferences, use of a memory system to store
beliefs and past experiences, and a global workspace that controls and
integrates the information processed by all the subsystems of the model. We
show in a large set of experiments how an autonomous agent can benefit from
having a cognitive architecture such as the one described.
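The abstract names four interacting mechanisms (attention over magnitudes, inner feelings and preferences, a memory of past experiences, and an integrating global workspace) without giving implementation details. The following is a minimal Python sketch of how such a loop could be wired together; every class, function, and numeric choice below is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation) of the cognitive loop the
# abstract describes: attention selects one environmental magnitude, inner
# preferences turn observations into a "feeling", a memory stores past
# experiences, and a global workspace integrates subsystem output into a
# single broadcast used to pick the next move. All names are assumptions.
import random
from dataclasses import dataclass, field


@dataclass
class Agent:
    # Preferred value per magnitude (inner preferences), assumed in [0, 1].
    preferences: dict
    memory: list = field(default_factory=list)  # past (observation, feeling)

    def attend(self, observation: dict) -> str:
        # Attention: focus on the magnitude farthest from its preferred value.
        return max(observation,
                   key=lambda m: abs(observation[m] - self.preferences[m]))

    def feel(self, observation: dict) -> float:
        # "Feeling": negative total mismatch between observation and
        # preferences (higher is better).
        return -sum(abs(observation[m] - self.preferences[m])
                    for m in observation)

    def global_workspace(self, observation: dict) -> dict:
        # Integrate subsystem outputs into one broadcast structure.
        focus = self.attend(observation)
        feeling = self.feel(observation)
        self.memory.append((dict(observation), feeling))  # store experience
        best_past = max(self.memory, key=lambda x: x[1])  # recalled belief
        return {"focus": focus, "feeling": feeling, "best_past": best_past}

    def act(self, observation: dict) -> str:
        # Move so the attended magnitude gets closer to its preferred value.
        broadcast = self.global_workspace(observation)
        m = broadcast["focus"]
        return (f"increase {m}" if observation[m] < self.preferences[m]
                else f"decrease {m}")


if __name__ == "__main__":
    agent = Agent(preferences={"temperature": 0.6, "light": 0.3,
                               "humidity": 0.5})
    obs = {m: random.random() for m in agent.preferences}
    print("observation:", obs)
    print("action:", agent.act(obs))
```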
Related papers
- Probing for Consciousness in Machines [3.196204482566275]
This study explores the potential for artificial agents to develop core consciousness.
The emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model.
Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness.
arXiv Detail & Related papers (2024-11-25T10:27:07Z)
- Emergence of Implicit World Models from Mortal Agents [0.276240219662896]
We discuss the possibility of world models and active exploration as emergent properties of open-ended behavior optimization in autonomous agents.
In discussing the source of the open-endedness of living things, we start from the perspective of biological systems as understood by the mechanistic approach of theoretical biology and artificial life.
arXiv Detail & Related papers (2024-11-19T07:43:30Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture that ensures the evolution of the agent based on a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Controlling Synthetic Characters in Simulations: A Case for Cognitive Architectures and Sigma [0.0]
Simulations require computational models of intelligence that generate realistic and credible behavior for the participating synthetic characters.
Sigma is a cognitive architecture and system that strives to combine what has been learned from four decades of independent work on symbolic cognitive architectures, probabilistic graphical models, and more recently neural models, under its graphical architecture hypothesis.
In this paper, we will introduce Sigma along with its diverse capabilities and then use three distinct proof-of-concept Sigma models to highlight combinations of these capabilities.
arXiv Detail & Related papers (2021-01-06T19:07:36Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
- Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning [102.05692309417047]
In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.
In the natural world, intelligent organisms learn from internal drives, bypassing the need for external signals.
We propose to formulate an intrinsic objective as the mutual information between the goal states and the controllable states.
arXiv Detail & Related papers (2020-02-05T19:21:20Z)
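The last related paper above formulates an intrinsic objective as the mutual information between goal states and controllable states. Below is a generic Python sketch of that idea, using a simple histogram-based estimator as an episode-level intrinsic reward; the estimator, function names, and dimensions are illustrative assumptions and do not reproduce that paper's method.

```python
# Generic sketch: reward the agent with an estimate of I(S_goal; S_control),
# the mutual information between goal-relevant and agent-controllable state
# components, so control over the goal state is rewarded even without any
# external reward signal. Histogram estimator and names are assumptions.
import numpy as np


def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Plug-in MI estimate (in nats) from a 2D histogram of samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y, shape (1, bins)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))


def intrinsic_reward(trajectory: np.ndarray, goal_dim: int,
                     control_dim: int) -> float:
    # trajectory: array of shape (T, state_dim) collected during one episode.
    return mutual_information(trajectory[:, goal_dim],
                              trajectory[:, control_dim])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    controllable = rng.normal(size=1000)
    # Goal state partially follows the controllable state -> positive MI.
    goal = 0.8 * controllable + 0.2 * rng.normal(size=1000)
    traj = np.stack([goal, controllable], axis=1)
    print("intrinsic reward:", intrinsic_reward(traj, goal_dim=0, control_dim=1))
```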
This list is automatically generated from the titles and abstracts of the papers on this site.