An information-theoretic perspective on intrinsic motivation in
reinforcement learning: a survey
- URL: http://arxiv.org/abs/2209.08890v1
- Date: Mon, 19 Sep 2022 09:47:43 GMT
- Title: An information-theoretic perspective on intrinsic motivation in
reinforcement learning: a survey
- Authors: Arthur Aubret, Laetitia Matignon, Salima Hassas
- Abstract summary: We propose to survey these research works through a new taxonomy based on information theory.
We computationally revisit the notions of surprise, novelty and skill learning.
Our analysis suggests that novelty and surprise can assist the building of a hierarchy of transferable skills.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reinforcement learning (RL) research area is very active, with a
large number of new contributions, especially in the emergent field of deep RL
(DRL). However, several scientific and technical challenges remain to be
resolved, among them the ability to abstract actions and the difficulty of
exploring the environment in sparse-reward settings, both of which can be
addressed by intrinsic motivation (IM). We propose to survey these
research works through a new taxonomy based on information theory: we
computationally revisit the notions of surprise, novelty and skill learning.
This allows us to identify advantages and disadvantages of methods and exhibit
current outlooks of research. Our analysis suggests that novelty and surprise
can assist the building of a hierarchy of transferable skills that further
abstracts the environment and makes the exploration process more robust.
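The survey's notion of surprise is commonly operationalized as the prediction error of a learned forward model of the environment's dynamics. As a minimal sketch of that idea (the linear model, dimensions, and function names below are illustrative assumptions, not taken from the survey):

```python
# Hedged sketch: "surprise" as forward-model prediction error, a common
# intrinsic reward signal. The toy linear forward model below stands in
# for a learned dynamics model f(s, a) -> s'.
import numpy as np

rng = np.random.default_rng(0)

state_dim, action_dim = 4, 2
# Illustrative fixed weights; in practice these would be trained online.
W = rng.normal(size=(state_dim + action_dim, state_dim))

def predict_next_state(state, action):
    """Forward model: predict s' from the concatenated (s, a)."""
    x = np.concatenate([state, action])
    return x @ W

def surprise_reward(state, action, next_state):
    """Intrinsic reward = squared prediction error of the forward model."""
    pred = predict_next_state(state, action)
    return float(np.sum((next_state - pred) ** 2))

s = rng.normal(size=state_dim)
a = rng.normal(size=action_dim)
s_next = rng.normal(size=state_dim)
r_int = surprise_reward(s, a, s_next)  # high when the transition is poorly predicted
```

Transitions the model predicts well yield near-zero intrinsic reward, so the agent is pushed toward parts of the environment its model has not yet captured.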
Related papers
- Fostering Intrinsic Motivation in Reinforcement Learning with Pretrained Foundation Models [8.255197802529118]
The recent rise of foundation models, such as CLIP, offers the opportunity to leverage pretrained, semantically rich embeddings.
Introductory modules can effectively utilize full state information, significantly increasing sample efficiency.
We show that embeddings provided by foundation models are sometimes even better than those constructed by the agent during training.
arXiv Detail & Related papers (2024-10-09T20:05:45Z)
- Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future [38.1992191907012]
Chain-of-thought prompting significantly enhances the reasoning capabilities of LLMs.
This paper systematically investigates relevant research, summarizing advanced methods.
We also delve into the current frontiers and delineate the challenges and future directions.
arXiv Detail & Related papers (2023-09-27T04:53:10Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
arXiv Detail & Related papers (2023-05-22T11:08:00Z)
- Knowledge-enhanced Neural Machine Reasoning: A Review [67.51157900655207]
We introduce a novel taxonomy that categorizes existing knowledge-enhanced methods into two primary categories and four subcategories.
We elucidate the current application domains and provide insight into promising prospects for future research.
arXiv Detail & Related papers (2023-02-04T04:54:30Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, where agents are self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aiming to attain Explainable Reinforcement Learning (XRL).
In critical situations where the agent's behaviour must be justified and explained, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Deep Reinforcement Learning and its Neuroscientific Implications [19.478332877763417]
The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
arXiv Detail & Related papers (2020-07-07T19:27:54Z)
- Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities [52.59080024266596]
We present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition.
We first introduce the multi-modality of the sensory data and provide information for public datasets.
We then propose a new taxonomy that structures the deep learning methods by challenge.
arXiv Detail & Related papers (2020-01-21T09:55:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.