Hierarchical principles of embodied reinforcement learning: A review
- URL: http://arxiv.org/abs/2012.10147v1
- Date: Fri, 18 Dec 2020 10:19:38 GMT
- Title: Hierarchical principles of embodied reinforcement learning: A review
- Authors: Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D.H. Nguyen,
Martin V. Butz and Stefan Wermter
- Abstract summary: We show that all important cognitive mechanisms have been implemented independently in isolated computational architectures.
We expect our results to guide the development of more sophisticated cognitively inspired hierarchical methods.
- Score: 11.613306236691427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive Psychology and related disciplines have identified several critical
mechanisms that enable intelligent biological agents to learn to solve complex
problems. There exists pressing evidence that the cognitive mechanisms that
enable problem-solving skills in these species build on hierarchical mental
representations. Among the most promising computational approaches to provide
comparable learning-based problem-solving abilities for artificial agents and
robots is hierarchical reinforcement learning. However, so far the existing
computational approaches have not been able to equip artificial agents with
problem-solving abilities that are comparable to intelligent animals, including
human and non-human primates, crows, or octopuses. Here, we first survey the
literature in Cognitive Psychology and related disciplines, and find that many
important mental mechanisms involve compositional abstraction, curiosity, and
forward models. We then relate these insights to contemporary hierarchical
reinforcement learning methods, and identify the key machine intelligence
approaches that realise these mechanisms. As our main result, we show that all
important cognitive mechanisms have been implemented independently in isolated
computational architectures, and there is simply a lack of approaches that
integrate them appropriately. We expect our results to guide the development of
more sophisticated cognitively inspired hierarchical methods, so that future
artificial agents achieve a problem-solving performance on the level of
intelligent animals.
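To make the abstract's three ingredients concrete, the following is a minimal, illustrative sketch (not taken from the paper) of how they might interact in a single agent: a high-level policy proposes subgoals (compositional abstraction), a low-level controller acts toward them, and a learned forward model turns its own prediction error into a curiosity signal that modulates exploration. The toy environment, dimensionalities, and learning rates are assumptions chosen only for readability.

```python
# Minimal sketch (assumptions throughout, not the authors' method): a two-level
# agent where forward-model prediction error serves as a curiosity signal.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2


class ForwardModel:
    """Linear model predicting the next state from (state, action)."""

    def __init__(self, lr=0.05):
        self.W = np.zeros((STATE_DIM, STATE_DIM + ACTION_DIM))
        self.lr = lr

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        self.W += self.lr * np.outer(error, x)   # gradient step on squared error
        return float(np.sum(error ** 2))         # prediction error = curiosity


def low_level_action(state, subgoal):
    """Low-level controller: move greedily toward the current subgoal."""
    direction = (subgoal - state)[:ACTION_DIM]
    return np.clip(direction, -1.0, 1.0)


def high_level_subgoal(state, curiosity):
    """High-level policy: propose a nearby subgoal, exploring more when surprise is high."""
    exploration = min(1.0, curiosity)
    return state + exploration * rng.normal(scale=0.5, size=STATE_DIM)


def step_env(state, action):
    """Toy environment: the state drifts with the action plus a little noise."""
    next_state = state.copy()
    next_state[:ACTION_DIM] += 0.1 * action
    return next_state + rng.normal(scale=0.01, size=STATE_DIM)


model = ForwardModel()
state = np.zeros(STATE_DIM)
curiosity = 1.0

for t in range(200):
    if t % 10 == 0:                              # high level acts on a slower timescale
        subgoal = high_level_subgoal(state, curiosity)
    action = low_level_action(state, subgoal)
    next_state = step_env(state, action)
    curiosity = model.update(state, action, next_state)
    state = next_state

print(f"final forward-model surprise: {curiosity:.5f}")
```

In a full hierarchical reinforcement learning system both policies would themselves be learned (for example, with off-policy actor-critic updates), and the curiosity term would typically be added to the extrinsic reward rather than used directly as an exploration scale; the sketch only shows how the three mechanisms named in the abstract can sit in one loop.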
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Advancing Perception in Artificial Intelligence through Principles of Cognitive Science [6.637438611344584]
We focus on the cognitive function of perception, the process of taking signals from one's surroundings as input and processing them to understand the environment.
We present a collection of methods in AI for researchers to build AI systems inspired by cognitive science.
arXiv Detail & Related papers (2023-10-13T01:21:55Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Non-equilibrium physics: from spin glasses to machine and neural learning [0.0]
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales.
We aim to characterize such emergent intelligence in disordered systems through statistical physics.
We uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems.
arXiv Detail & Related papers (2023-08-03T04:56:47Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Intelligent problem-solving as integrated hierarchical reinforcement learning [11.284287026711125]
Development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.
We propose steps to integrate biologically inspired hierarchical mechanisms to enable advanced problem-solving skills in artificial agents.
We expect our results to guide the development of more sophisticated cognitively inspired hierarchical machine learning architectures.
arXiv Detail & Related papers (2022-08-18T09:28:03Z)
- Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture that enables the agent to evolve on the basis of a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z)
- Projection: A Mechanism for Human-like Reasoning in Artificial Intelligence [6.218613353519724]
Methods of inference exploiting top-down information (from a model) have been shown to be effective for recognising entities in difficult conditions.
Projection is shown to be a key mechanism to solve the problem of applying knowledge to varied or challenging situations.
arXiv Detail & Related papers (2021-03-24T22:33:51Z)