Broad-persistent Advice for Interactive Reinforcement Learning Scenarios
- URL: http://arxiv.org/abs/2210.05187v1
- Date: Tue, 11 Oct 2022 06:46:27 GMT
- Title: Broad-persistent Advice for Interactive Reinforcement Learning Scenarios
- Authors: Francisco Cruz, Adam Bignold, Hung Son Nguyen, Richard Dazeley, Peter Vamplew
- Abstract summary: We present a method for retaining and reusing provided knowledge, allowing trainers to give general advice relevant to more than just the current state.
Results obtained show that the use of broad-persistent advice substantially improves the performance of the agent.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of interactive advice in reinforcement learning scenarios allows for
speeding up the learning process for autonomous agents. Current interactive
reinforcement learning research has been limited to real-time interactions that
offer relevant user advice to the current state only. Moreover, the information
provided by each interaction is not retained and instead discarded by the agent
after a single use. In this paper, we present a method for retaining and
reusing provided knowledge, allowing trainers to give general advice relevant
to more than just the current state. Results obtained show that the use of
broad-persistent advice substantially improves the performance of the agent
while reducing the number of interactions required for the trainer.
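To make the idea of broad-persistent advice concrete, below is a minimal, illustrative sketch of a Q-learning agent that retains trainer advice keyed by a generalized (coarsely discretized) state, so one piece of advice applies to many similar states and persists across episodes. The class and method names (`BroadPersistentAdvisedAgent`, `generalize`, `give_advice`) are assumptions for illustration only, not the paper's actual implementation.

```python
import random
from collections import defaultdict

class BroadPersistentAdvisedAgent:
    """Q-learning agent that retains and reuses trainer advice.

    Advice is stored per *generalized* state, so it covers similar
    states, not just the exact state where it was given.
    (Illustrative sketch, not the paper's implementation.)
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (generalized state, action) -> value
        self.advice = {}              # generalized state -> advised action
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def generalize(self, state):
        # Map a raw state to a broader group, e.g. coarse discretization.
        return tuple(round(x, 1) for x in state)

    def give_advice(self, state, action):
        # Retain advice for the whole group of similar states.
        self.advice[self.generalize(state)] = action

    def act(self, state):
        key = self.generalize(state)
        if key in self.advice:        # reuse retained advice
            return self.advice[key]
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(key, a)])

    def update(self, state, action, reward, next_state):
        key, nkey = self.generalize(state), self.generalize(next_state)
        best_next = max(self.q[(nkey, a)] for a in self.actions)
        self.q[(key, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(key, action)])
```

Because advice is stored rather than discarded after one use, a single interaction can steer the agent in every future visit to a similar state, which is the mechanism behind the reduced number of trainer interactions reported above.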
Related papers
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.
arXiv Detail & Related papers (2022-03-13T13:22:41Z)
- A Broad-persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments [0.3683202928838613]
Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert who gives advice to help learners choose actions, speeding up the learning process.
In this paper, we present Broad-persistent Advising (BPA), a broad-persistent advising approach that retains and reuses the processed information.
It not only helps trainers to give more general advice relevant to similar states instead of only the current state but also allows the agent to speed up the learning process.
arXiv Detail & Related papers (2021-10-15T10:56:00Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
- Persistent Rule-based Interactive Reinforcement Learning [0.5999777817331317]
Current interactive reinforcement learning research has been limited to interactions that offer relevant advice to the current state only.
We propose a persistent rule-based interactive reinforcement learning approach, i.e., a method for retaining and reusing provided knowledge.
Our experimental results show persistent advice substantially improves the performance of the agent while reducing the number of interactions required for the trainer.
arXiv Detail & Related papers (2021-02-04T06:48:57Z)
- Generative Inverse Deep Reinforcement Learning for Online Recommendation [62.09946317831129]
We propose a novel inverse reinforcement learning approach, namely InvRec, for online recommendation.
InvRec automatically extracts the reward function from users' behaviors for online recommendation.
arXiv Detail & Related papers (2020-11-04T12:12:25Z)
- Human Engagement Providing Evaluative and Informative Advice for Interactive Reinforcement Learning [2.5799044614524664]
This work focuses on answering which of two approaches, evaluative or informative, is the preferred instructional approach for humans.
Results show users giving informative advice provide more accurate advice, are willing to assist the learner agent for a longer time, and provide more advice per episode.
arXiv Detail & Related papers (2020-09-21T02:14:02Z)
- A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review [10.73121872355072]
We propose a conceptual framework and taxonomy for assisted reinforcement learning.
The proposed taxonomy details the relationship between the external information source and the learner agent.
We identify current streams of reinforcement learning that use external information to improve the agent's performance.
arXiv Detail & Related papers (2020-07-03T08:07:31Z)
- Knowledge-guided Deep Reinforcement Learning for Interactive Recommendation [49.32287384774351]
Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided deep Reinforcement learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
arXiv Detail & Related papers (2020-04-17T05:26:47Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance on stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.