Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development
- URL: http://arxiv.org/abs/2102.00572v1
- Date: Mon, 1 Feb 2021 00:29:01 GMT
- Title: Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development
- Authors: Aref Hakimzadeh, Yanbo Xue, and Peyman Setoodeh
- Abstract summary: This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's cognitive development theory provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificially intelligent systems.
- Score: 1.7778609937758327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Endeavors for designing robots with human-level cognitive abilities have led
to different categories of learning machines. According to Skinner's theory,
reinforcement learning (RL) plays a key role in human intuition and cognition.
The majority of state-of-the-art methods, including deep RL algorithms, are
strongly influenced by the connectionist viewpoint. Such algorithms can
significantly benefit from theories of mind and learning in other disciplines.
This paper entertains the idea that theories such as the language of thought
hypothesis (LOTH), script theory, and Piaget's cognitive development theory
provide complementary approaches, which will enrich the RL field. Following
this line of thinking, a general computational building block is proposed for
Piaget's schema theory that supports the notions of productivity,
systematicity, and inferential coherence as described by Fodor, in contrast
with connectionist theory. Abstraction in the proposed method is left entirely
to the system itself and is not externally constrained by any predefined
architecture. The whole process matches Neisser's perceptual cycle model.
Experiments on three typical control problems, followed by behavioral
analysis, confirm the interpretability of the proposed method and its
competitiveness with state-of-the-art algorithms. Hence, the proposed
framework can be viewed as a step towards achieving human-like cognition in
artificially intelligent systems.
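To make the schema idea concrete, below is a minimal, hypothetical Python sketch of a schema-style agent running perceive-act-evaluate cycles on a toy one-dimensional task. The class names (Schema, SchemaAgent), the toy environment, and the assimilation/accommodation rules are illustrative assumptions for orientation only, not the authors' implementation.
```python
# A minimal, hypothetical sketch (not the paper's code) of a schema-style agent
# running perceive -> act -> evaluate cycles in the spirit of Piaget's
# assimilation/accommodation and Neisser's perceptual cycle.
import random
from dataclasses import dataclass


@dataclass
class Schema:
    """A toy schema: the context it was built for, the action it proposes,
    and a running estimate of how well acting on it works."""
    context: int
    action: int
    value: float = 0.0
    uses: int = 0


class SchemaAgent:
    def __init__(self, n_actions: int, lr: float = 0.3, eps: float = 0.1):
        self.n_actions = n_actions
        self.lr = lr          # step size for refining a schema's value estimate
        self.eps = eps        # chance of accommodating even when a schema matches
        self.schemas: list[Schema] = []

    def act(self, context: int) -> tuple[int, Schema]:
        """Assimilation: reuse the best schema matching the current percept.
        Accommodation: build a new schema when no adequate match exists."""
        matches = [s for s in self.schemas if s.context == context]
        if matches and random.random() > self.eps:
            schema = max(matches, key=lambda s: s.value)
        else:
            schema = Schema(context=context, action=random.randrange(self.n_actions))
            self.schemas.append(schema)
        return schema.action, schema

    def update(self, schema: Schema, reward: float) -> None:
        """Evaluate the outcome of acting on the schema and refine its estimate."""
        schema.uses += 1
        schema.value += self.lr * (reward - schema.value)


def toy_step(pos: int, action: int) -> tuple[int, float]:
    """Toy control task: the agent is rewarded for moving toward position 0."""
    new_pos = pos + (1 if action == 1 else -1)
    return new_pos, (1.0 if abs(new_pos) < abs(pos) else -1.0)


if __name__ == "__main__":
    agent, pos = SchemaAgent(n_actions=2), 5
    for _ in range(200):          # one perceptual cycle per iteration
        action, schema = agent.act(pos)
        pos, reward = toy_step(pos, action)
        agent.update(schema, reward)
    print(f"learned {len(agent.schemas)} schemas, final position {pos}")
```
In this toy reading, assimilation is reusing an existing schema whose context matches the current percept, while accommodation creates a new schema when nothing adequate matches; the paper's building block is richer than this, so treat the sketch only as an orientation aid.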
Related papers
- The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model exhibits aspects of consciousness, while we acknowledge the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities [19.83434949066066]
This paper introduces a novel intelligent framework, referred to as OlaGPT.
OlaGPT carefully studies a cognitive architecture framework and proposes to simulate certain aspects of human cognition.
The framework involves approximating different cognitive modules, including attention, memory, reasoning, learning, and corresponding scheduling and decision-making mechanisms.
arXiv Detail & Related papers (2023-05-23T09:36:51Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture that ensures the evolution of the agent on the basis of a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z)
- Brain Principles Programming [0.3867363075280543]
Brain Principles Programming (BPP) is the formalization of universal mechanisms (principles) of how the brain works with information.
The paper uses mathematical models and algorithms from several existing theories.
arXiv Detail & Related papers (2022-02-13T13:41:44Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Mesarovician Abstract Learning Systems [0.0]
Current approaches to learning hold notions of problem domain and problem task as fundamental precepts.
Mesarovician abstract systems theory is used as a super-structure for learning.
arXiv Detail & Related papers (2021-11-29T18:17:32Z)
- The Evolution of Concept-Acquisition based on Developmental Psychology [4.416484585765028]
A conceptual system with rich connotation is key to improving the performance of knowledge-based artificial intelligence systems.
Finding a new method to represent concepts and construct a conceptual system will greatly improve the performance of many intelligent systems.
Developmental psychology carefully observes the process of concept acquisition in humans at the behavioral level.
arXiv Detail & Related papers (2020-11-26T01:57:24Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.