Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development
- URL: http://arxiv.org/abs/2102.00572v1
- Date: Mon, 1 Feb 2021 00:29:01 GMT
- Title: Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development
- Authors: Aref Hakimzadeh, Yanbo Xue, and Peyman Setoodeh
- Abstract summary: This paper entertains the idea that theories such as language of thought hypothesis (LOTH), script theory, and Piaget's cognitive development theory provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
- Score: 1.7778609937758327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Endeavors for designing robots with human-level cognitive abilities have led
to different categories of learning machines. According to Skinner's theory,
reinforcement learning (RL) plays a key role in human intuition and cognition.
The majority of state-of-the-art methods, including deep RL algorithms, are
strongly influenced by the connectionist viewpoint. Such algorithms can
significantly benefit from theories of mind and learning in other disciplines.
This paper entertains the idea that theories such as language of thought
hypothesis (LOTH), script theory, and Piaget's cognitive development theory
provide complementary approaches, which will enrich the RL field. Following
this line of thinking, a general computational building block is proposed for
Piaget's schema theory that supports the notions of productivity,
systematicity, and inferential coherence as described by Fodor, in contrast
with connectionist theory. Abstraction in the proposed method emerges entirely
from the system itself and is not externally constrained by any predefined
architecture. The whole process matches Neisser's perceptual cycle model.
Experiments on three typical control problems, followed by behavioral
analysis, confirm the interpretability of the proposed method and its
competitiveness with state-of-the-art algorithms. Hence, the proposed
framework can be viewed as a step towards achieving human-like cognition in
artificially intelligent systems.
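The abstract benchmarks the schema-based framework against standard RL on
control problems. As a point of reference only, here is a minimal tabular
Q-learning sketch on a hypothetical toy 1-D control task; this is a generic
baseline of the kind such methods are compared against, not the authors'
schema-based architecture, and all names and parameters are illustrative.

```python
import random

# Generic tabular Q-learning on a toy 1-D control task: reach the rightmost
# state. NOT the paper's schema-based framework -- a hypothetical baseline.
N = 6              # states 0..5; state 5 is the goal
ACTIONS = [-1, 1]  # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def greedy(s):
    # Break ties randomly so the untrained agent still explores.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for _ in range(500):                       # training episodes
    s = 0
    for _ in range(100):                   # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
                              - Q[(s, a)])
        s = s2
        if s == N - 1:
            break

# After training, the greedy policy steps right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
```

Unlike such a flat value table, the paper's schema-based approach aims to make
the learned structure itself inspectable, which is the interpretability claim
the experiments test.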
Related papers
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play
Multi-Character Belief Tracker [72.09076317574238]
ToM is a plug-and-play approach to investigate the belief states of characters in reading comprehension.
We show that ToM enhances off-the-shelf neural networks' theory of mind in a zero-shot setting while showing robust out-of-distribution performance compared to supervised baselines.
arXiv Detail & Related papers (2023-06-01T17:24:35Z) - OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities [19.83434949066066]
This paper introduces a novel intelligent framework, referred to as OlaGPT.
OlaGPT builds on a careful study of cognitive architecture frameworks and proposes to simulate certain aspects of human cognition.
The framework involves approximating different cognitive modules, including attention, memory, reasoning, learning, and corresponding scheduling and decision-making mechanisms.
arXiv Detail & Related papers (2023-05-23T09:36:51Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture which ensures the evolution of the agent on the basis of Symbol Emergence Problem solution.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - A Quantitative Symbolic Approach to Individual Human Reasoning [0.0]
We take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning.
We employ techniques from non-monotonic reasoning and computer science, namely the solving paradigm of answer set programming (ASP).
Finally, we can fruitfully use plausibility reasoning in ASP to test the effects of an existing experiment and explain different majority responses.
arXiv Detail & Related papers (2022-05-10T16:43:47Z) - Brain Principles Programming [0.3867363075280543]
Brain Principles Programming, BPP, is the formalization of universal mechanisms (principles) of the brain's work with information.
The paper uses mathematical models and algorithms of the following theories.
arXiv Detail & Related papers (2022-02-13T13:41:44Z) - Active Inference in Robotics and Artificial Agents: Survey and
Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Mesarovician Abstract Learning Systems [0.0]
Current approaches to learning hold notions of problem domain and problem task as fundamental precepts.
Mesarovician abstract systems theory is used as a super-structure for learning.
arXiv Detail & Related papers (2021-11-29T18:17:32Z) - The Evolution of Concept-Acquisition based on Developmental Psychology [4.416484585765028]
A conceptual system with rich connotation is key to improving the performance of knowledge-based artificial intelligence systems.
Finding a new method to represent concepts and construct a conceptual system will greatly improve the performance of many intelligent systems.
Developmental psychology carefully observes the process of concept acquisition in humans at the behavioral level.
arXiv Detail & Related papers (2020-11-26T01:57:24Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.