Persistent Rule-based Interactive Reinforcement Learning
- URL: http://arxiv.org/abs/2102.02441v1
- Date: Thu, 4 Feb 2021 06:48:57 GMT
- Title: Persistent Rule-based Interactive Reinforcement Learning
- Authors: Adam Bignold, Francisco Cruz, Richard Dazeley, Peter Vamplew, and Cameron Foale
- Abstract summary: Current interactive reinforcement learning research has been limited to interactions that offer relevant advice to the current state only.
We propose a persistent rule-based interactive reinforcement learning approach, i.e., a method for retaining and reusing provided knowledge.
Our experimental results show persistent advice substantially improves the agent's performance while reducing the number of interactions required from the trainer.
- Score: 0.5999777817331317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interactive reinforcement learning speeds up the learning process
in autonomous agents by including a human trainer who provides extra
information to the agent in real-time. Current interactive reinforcement
learning research has been limited to interactions that offer advice relevant
to the current state only. Additionally, the information provided by each
interaction is not retained; the agent discards it after a single use. In this
work, we propose a persistent rule-based interactive reinforcement learning
approach, i.e., a method for retaining and reusing provided knowledge,
allowing trainers to give general advice relevant to more than just the
current state. Our experimental results show that persistent advice
substantially improves the performance of the agent while reducing the number
of interactions required from the trainer. Moreover, rule-based advice shows
a performance impact similar to state-based advice, but with a substantially
reduced interaction count.
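The retain-and-reuse mechanism described in the abstract can be sketched in Python. The class, rule representation, and grid-world state format below are illustrative assumptions, not the paper's actual implementation: a rule pairs a state predicate with an advised action, so one interaction covers every matching state instead of being discarded after a single use.

```python
from dataclasses import dataclass, field
from typing import Callable
import random

State = tuple  # e.g. an (x, y) grid-world position (assumed encoding)

@dataclass
class Rule:
    """A persistent piece of advice: if the predicate matches, take the action."""
    predicate: Callable[[State], bool]
    action: str

@dataclass
class PersistentAdviceAgent:
    """Epsilon-greedy Q-learning agent that retains trainer advice as rules."""
    actions: list
    epsilon: float = 0.1
    rules: list = field(default_factory=list)
    q: dict = field(default_factory=dict)

    def add_rule(self, predicate, action):
        # Retain the advice instead of discarding it after one use.
        self.rules.append(Rule(predicate, action))

    def select_action(self, state: State) -> str:
        # Rules give general advice: any state matching a predicate reuses it,
        # so the trainer does not have to repeat the interaction.
        for rule in self.rules:
            if rule.predicate(state):
                return rule.action
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

agent = PersistentAdviceAgent(actions=["up", "down", "left", "right"])
# One interaction covers every state in the bottom row, not just the current one.
agent.add_rule(lambda s: s[1] == 0, "up")
```

With this sketch, `agent.select_action((3, 0))` and `agent.select_action((7, 0))` both follow the single piece of advice, which is the interaction-count saving the abstract reports.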
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
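The RLIF idea summarized above (user interventions themselves as rewards) can be sketched as follows; the transition format and function names are illustrative assumptions, not the paper's code:

```python
def rlif_reward(intervened: bool) -> float:
    """The expert's intervention itself is the (negative) reward signal,
    so no task reward function or near-optimal demonstrations are needed."""
    return -1.0 if intervened else 0.0

def label_transitions(trajectory):
    """Attach intervention-based rewards to (state, action, next_state,
    intervened) tuples for downstream off-policy RL."""
    return [(s, a, rlif_reward(flag), s2) for (s, a, s2, flag) in trajectory]

# A policy trained to minimize interventions implicitly improves on the expert.
labeled = label_transitions([("s0", "a0", "s1", False),
                             ("s1", "a1", "s2", True)])
```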
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Multi-trainer Interactive Reinforcement Learning System [7.3072544716528345]
We propose a more effective interactive reinforcement learning system by introducing multiple trainers.
In particular, our trainer feedback aggregation experiments show that our aggregation method has the best accuracy.
Finally, we conduct a grid-world experiment to show that the policy trained by the MTIRL with the review model is closer to the optimal policy than that without a review model.
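Feedback aggregation across multiple trainers could be sketched as a reliability-weighted vote. This is a hypothetical illustration of the general idea only; the summary above does not specify MTIRL's actual aggregation or review model:

```python
from collections import Counter

def aggregate_advice(votes, reliability):
    """Hypothetical reliability-weighted vote: each trainer's advised action
    is weighted by an estimate of that trainer's accuracy."""
    scores = Counter()
    for trainer, action in votes.items():
        scores[action] += reliability.get(trainer, 1.0)
    return scores.most_common(1)[0][0]

# Two moderately reliable trainers outvote one highly reliable trainer.
choice = aggregate_advice({"t1": "left", "t2": "right", "t3": "right"},
                          {"t1": 0.9, "t2": 0.6, "t3": 0.7})  # -> "right"
```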
arXiv Detail & Related papers (2022-10-14T18:32:59Z)
- Broad-persistent Advice for Interactive Reinforcement Learning Scenarios [2.0549239024359762]
We present a method for retaining and reusing provided knowledge, allowing trainers to give general advice relevant to more than just the current state.
Results obtained show that the use of broad-persistent advice substantially improves the performance of the agent.
arXiv Detail & Related papers (2022-10-11T06:46:27Z)
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- A Broad-persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments [0.3683202928838613]
Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice to help learners choose actions, speeding up the learning process.
In this paper, we present Broad-persistent Advising (BPA), a broad-persistent advising approach that retains and reuses the processed information.
It not only helps trainers to give more general advice relevant to similar states instead of only the current state but also allows the agent to speed up the learning process.
arXiv Detail & Related papers (2021-10-15T10:56:00Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
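The relabeling step named in PEBBLE's title can be sketched briefly; the transition format and reward-model signature are illustrative assumptions:

```python
def relabel(replay_buffer, reward_model):
    """PEBBLE-style relabeling sketch: when the reward model learned from
    human preference feedback is updated, stored transitions are re-scored
    with it so off-policy learning uses consistent rewards."""
    return [(s, a, reward_model(s, a), s2) for (s, a, _, s2) in replay_buffer]

# After a reward-model update, old experience stays usable.
buffer = [("s0", "a0", 0.0, "s1"), ("s1", "a1", 0.0, "s2")]
updated = relabel(buffer, lambda s, a: 1.0)
```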
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
- Human Engagement Providing Evaluative and Informative Advice for Interactive Reinforcement Learning [2.5799044614524664]
This work focuses on answering which of two approaches, evaluative or informative, is the preferred instructional approach for humans.
Results show users giving informative advice provide more accurate advice, are willing to assist the learner agent for a longer time, and provide more advice per episode.
arXiv Detail & Related papers (2020-09-21T02:14:02Z)
- A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review [10.73121872355072]
We propose a conceptual framework and taxonomy for assisted reinforcement learning.
The proposed taxonomy details the relationship between the external information source and the learner agent.
We identify current streams of reinforcement learning that use external information to improve the agent's performance.
arXiv Detail & Related papers (2020-07-03T08:07:31Z)
- Knowledge-guided Deep Reinforcement Learning for Interactive Recommendation [49.32287384774351]
Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided deep Reinforcement learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
arXiv Detail & Related papers (2020-04-17T05:26:47Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance on stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
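The model-distillation transfer described above is commonly realized as a divergence penalty between teacher and student distributions. The summary does not give the paper's exact objective; the standard KL-divergence formulation serves as a sketch:

```python
import math

def distill_loss(teacher_probs, student_probs):
    """Generic KL-divergence distillation loss between a teacher's and a
    student's categorical distributions: minimizing it pulls the student's
    predictions toward the teacher's, transferring the teacher's knowledge."""
    return sum(t * math.log(t / s)
               for t, s in zip(teacher_probs, student_probs) if t > 0)

# Identical distributions incur zero loss; mismatched ones are penalized.
zero = distill_loss([0.5, 0.5], [0.5, 0.5])
gap = distill_loss([1.0, 0.0], [0.5, 0.5])
```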
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.