World Model Agents with Change-Based Intrinsic Motivation
- URL: http://arxiv.org/abs/2503.21047v1
- Date: Wed, 26 Mar 2025 23:40:03 GMT
- Title: World Model Agents with Change-Based Intrinsic Motivation
- Authors: Jeremias Ferrao, Rafael Cunha
- Abstract summary: Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET) has shown potential in addressing sparse feedback but its effectiveness with modern algorithms remains understudied.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback, but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results highlight the possibility of CBET improving DreamerV3's returns in Crafter, but the algorithm attains a suboptimal policy in Minigrid, where CBET further reduces returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET has a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
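For intuition, the sketch below shows one plausible form of a change-based intrinsic reward: the agent earns a bonus for producing rarely seen changes between consecutive observations. The binarized diff, hash-based counting, and 1/sqrt(count) scaling are illustrative assumptions, not necessarily the exact CBET formulation adapted here for DreamerV3.

```python
import numpy as np
from collections import defaultdict

class ChangeBasedIntrinsicReward:
    """Count-based bonus on observation *changes* (illustrative CBET-style sketch)."""

    def __init__(self):
        self.change_counts = defaultdict(int)  # N(c): times each change pattern was seen

    def __call__(self, obs, next_obs):
        # The "change" is the set of positions that differ between consecutive frames.
        change = (np.asarray(next_obs) != np.asarray(obs)).astype(np.uint8)
        key = change.tobytes()  # hashable signature of the change pattern
        self.change_counts[key] += 1
        # Novel changes earn large bonuses; repeated ones decay toward zero.
        return 1.0 / np.sqrt(self.change_counts[key])
```

In practice such a bonus would be mixed with the extrinsic reward, e.g. r = r_ext + beta * r_int, with the intrinsic term annealed or dropped when transferring to the downstream task.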
Related papers
- InDRiVE: Intrinsic Disagreement based Reinforcement for Vehicle Exploration through Curiosity Driven Generalized World Model
In this paper, we propose InDRiVE (Intrinsic Disagreement based Reinforcement for Vehicle Exploration) as a model-based reinforcement learning framework. By training an ensemble of world models, the agent actively explores high-uncertainty regions of the environment without task-specific feedback (a sketch of this disagreement signal appears after this list). Experimental results in both seen and unseen environments demonstrate that InDRiVE achieves higher success rates and fewer infractions than the DreamerV2 and DreamerV3 baselines.
arXiv Detail & Related papers (2025-03-07T16:56:00Z)
- DreamerV3 for Traffic Signal Control: Hyperparameter Tuning and Performance
Reinforcement learning (RL) has evolved into a widely investigated technology for the development of smart traffic signal control (TSC) strategies. The DreamerV3 algorithm presents compelling properties for policy learning. In this paper, a corridor TSC model is trained using the DreamerV3 algorithm to explore the benefits of world models for TSC strategy learning.
arXiv Detail & Related papers (2025-03-04T05:02:46Z)
- The Dark Side of Rich Rewards: Understanding and Mitigating Noise in VLM Rewards
Vision-Language Models (VLMs) are increasingly used to generate reward signals for training embodied agents. Our research reveals that agents guided by VLM rewards often underperform compared to those employing only intrinsic rewards. We introduce BiMI, a novel reward function designed to mitigate noise.
arXiv Detail & Related papers (2024-09-24T09:45:20Z)
- MuDreamer: Learning Predictive World Models without Reconstruction
We present MuDreamer, a robust reinforcement learning agent that builds upon the DreamerV3 algorithm by learning a predictive world model without the need for reconstructing input signals.
Our method achieves comparable performance on the Atari100k benchmark while benefiting from faster training.
arXiv Detail & Related papers (2024-05-23T22:09:01Z)
- Efficient Imitation Learning with Conservative World Models
We tackle the problem of policy learning from expert demonstrations without a reward function.
We re-frame imitation learning as a fine-tuning problem, rather than a pure reinforcement learning one.
arXiv Detail & Related papers (2024-05-21T20:53:18Z)
- REBEL: Reward Regularization-Based Approach for Robotic Reinforcement Learning from Human Feedback
A misalignment between the reward function and human preferences can lead to catastrophic outcomes in the real world. Recent methods aim to mitigate misalignment by learning reward functions from human preferences. We propose a novel concept of reward regularization within the robotic RLHF framework.
arXiv Detail & Related papers (2023-12-22T04:56:37Z)
- A State Augmentation based approach to Reinforcement Learning from Human Preferences
Preference Based Reinforcement Learning attempts to solve this issue by utilizing binary feedback on queried trajectory pairs.
We present a state augmentation technique that allows the agent's reward model to be robust.
arXiv Detail & Related papers (2023-02-17T07:10:50Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
We propose an inverse reinforcement learning algorithm, called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called ΨΦ-learning.
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- Disturbing Reinforcement Learning Agents with Corrupted Rewards
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probability values the learned policy is more robust to corrupted rewards.
arXiv Detail & Related papers (2021-02-12T15:53:48Z)
- Semi-supervised reward learning for offline reinforcement learning
Training agents usually requires reward functions, but rewards are seldom available in practice and their engineering is challenging and laborious.
We propose semi-supervised learning algorithms that learn from limited annotations and incorporate unlabelled data.
In our experiments with a simulated robotic arm, we greatly improve upon behavioural cloning and closely approach the performance achieved with ground truth rewards.
arXiv Detail & Related papers (2020-12-12T20:06:15Z)
- Ecological Reinforcement Learning
We study the kinds of environment properties that can make learning under such conditions easier.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
arXiv Detail & Related papers (2020-06-22T17:55:03Z)
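For the disagreement-based exploration used by entries like InDRiVE above, here is a minimal sketch of the general idea in the style of Plan2Explore-like methods: each member of a world-model ensemble predicts the next latent state, and the variance across members serves as the exploration bonus. The ensemble interface is assumed for illustration and is not InDRiVE's actual API.

```python
import numpy as np

def disagreement_reward(ensemble, state, action):
    """Intrinsic reward from ensemble disagreement (illustrative sketch).

    `ensemble` is assumed to be a list of callables mapping
    (state, action) -> predicted next-state vector.
    """
    predictions = np.stack([model(state, action) for model in ensemble])
    # High variance across members signals high epistemic uncertainty,
    # which the exploring agent is rewarded for seeking out.
    return predictions.var(axis=0).mean()
```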