Transferring Domain Knowledge with an Adviser in Continuous Tasks
- URL: http://arxiv.org/abs/2102.08029v1
- Date: Tue, 16 Feb 2021 09:03:33 GMT
- Title: Transferring Domain Knowledge with an Adviser in Continuous Tasks
- Authors: Rukshan Wijesinghe, Kasun Vithanage, Dumindu Tissera, Alex Xavier,
Subha Fernando and Jayathu Samarawickrama
- Abstract summary: Reinforcement learning techniques are incapable of explicitly incorporating domain-specific knowledge into the learning process.
We adapt the Deep Deterministic Policy Gradient (DDPG) algorithm to incorporate an adviser.
Our experiments on OpenAI Gym benchmark tasks show that integrating domain knowledge through advisers expedites learning and improves the policy towards better optima.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Reinforcement Learning (RL) have surpassed human-level
performance in many simulated environments. However, existing reinforcement
learning techniques cannot explicitly incorporate already known
domain-specific knowledge into the learning process. Agents therefore have to
discover this knowledge independently through a trial-and-error approach,
which consumes both time and resources before they produce valid responses.
Hence, we adapt the Deep Deterministic Policy Gradient (DDPG) algorithm to
incorporate an adviser, which allows integrating domain knowledge in the form
of pre-learned policies or pre-defined relationships to enhance the agent's
learning process. Our experiments on OpenAI Gym benchmark tasks show that
integrating domain knowledge through advisers expedites the learning and
improves the policy towards better optima.
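The abstract does not spell out how the adviser is wired into DDPG, so the following is only a plausible sketch: during exploration, the agent follows the adviser's recommendation with some probability and the noisy actor otherwise. The adviser rule, the linear actor stand-in, and the hyperparameters `p_adviser` and `noise_scale` are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def adviser_action(state):
    """Hypothetical adviser encoding a pre-defined relationship:
    a hand-coded proportional rule clipped to the action bounds."""
    return np.clip(-0.5 * state, -1.0, 1.0)

def actor_action(state, weights):
    """Stand-in for the DDPG actor network (a linear map for brevity)."""
    return np.tanh(weights @ state)

def select_action(state, weights, p_adviser=0.3, noise_scale=0.1):
    """With probability p_adviser follow the adviser, otherwise use the
    exploration-noise-perturbed actor output."""
    if rng.random() < p_adviser:
        return adviser_action(state)
    a = actor_action(state, weights)
    return np.clip(a + noise_scale * rng.normal(size=a.shape), -1.0, 1.0)

state = np.array([0.4, -0.2])
weights = rng.normal(size=(2, 2))  # action dim 2, state dim 2
action = select_action(state, weights)
print(action.shape)
```

Either branch returns an action inside the clipped bounds, so the critic update and replay buffer of standard DDPG can be left unchanged under this scheme.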
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Leveraging Multi-AI Agents for Cross-Domain Knowledge Discovery [0.0]
This study introduces a novel approach to cross-domain knowledge discovery through the deployment of multi-AI agents.
Our findings demonstrate the superior capability of a domain-specific multi-AI agent system in identifying and bridging knowledge gaps.
arXiv Detail & Related papers (2024-04-12T14:50:41Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
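The core reward construction described above can be sketched in a few lines: the learner is penalized whenever the expert intervenes and receives zero reward otherwise, so minimizing interventions becomes the RL objective. The episode log and reward values here are illustrative assumptions, not details from the paper.

```python
def rlif_reward(expert_intervened: bool) -> float:
    """Reward derived purely from the intervention signal:
    -1 when the expert steps in, 0 otherwise."""
    return -1.0 if expert_intervened else 0.0

# Hypothetical per-step intervention log for one episode.
episode = [False, False, True, False, True]
episode_return = sum(rlif_reward(i) for i in episode)
print(episode_return)  # -2.0
```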
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Inapplicable Actions Learning for Knowledge Transfer in Reinforcement Learning [3.194414753332705]
We show that learning inapplicable actions greatly improves the sample efficiency of RL algorithms.
Thanks to the transferability of the knowledge acquired, it can be reused in other tasks and domains to make the learning process more efficient.
arXiv Detail & Related papers (2022-11-28T17:45:39Z) - Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z) - Multi-Agent Advisor Q-Learning [18.8931184962221]
We provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings.
We present two novel Q-learning based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE)
We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum games.
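One minimal way to picture advisor-guided Q-learning of this kind: follow the advisor's recommendation with a probability that decays over time, otherwise act greedily on the learned Q-table. This is a generic sketch of the idea, not the paper's ADMIRAL update rules; the advisor, decay schedule, and learning-rate values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))

def advisor(state):
    """Hypothetical sub-optimal advisor: always recommends action 0."""
    return 0

def choose(state, t, decay=0.99):
    """Follow the advisor with probability decay**t (shrinking over
    time), otherwise act greedily on the current Q estimates."""
    if rng.random() < decay ** t:
        return advisor(state)
    return int(np.argmax(Q[state]))

# One illustrative Q-learning update (alpha=0.1, gamma=0.9).
s, r, s2 = 0, 1.0, 1
a = choose(s, t=0)  # at t=0 the advisor is always followed
Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])
print(Q[s, a])
```

As the advisor probability decays, the agent transitions from exploiting the recommendations to relying on its own value estimates.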
arXiv Detail & Related papers (2021-10-26T00:21:15Z)
- A Broad-persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments [0.3683202928838613]
Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice to help learners choose actions, speeding up the learning process.
In this paper, we present Broad-persistent Advising (BPA), a broad-persistent advising approach that retains and reuses the processed information.
It not only helps trainers to give more general advice relevant to similar states instead of only the current state but also allows the agent to speed up the learning process.
arXiv Detail & Related papers (2021-10-15T10:56:00Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the expert's knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- KnowledgeCheckR: Intelligent Techniques for Counteracting Forgetting [52.623349754076024]
We provide an overview of the recommendation approaches integrated in KnowledgeCheckR.
Examples thereof are utility-based recommendation that helps to identify learning contents to be repeated in the future, collaborative filtering approaches that help to implement session-based recommendation, and content-based recommendation that supports intelligent question answering.
arXiv Detail & Related papers (2021-02-15T20:06:28Z)
- Knowledge-guided Deep Reinforcement Learning for Interactive Recommendation [49.32287384774351]
Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided deep Reinforcement learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
arXiv Detail & Related papers (2020-04-17T05:26:47Z)
- KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human Suboptimal Knowledge [40.343858932413376]
We propose knowledge guided policy network (KoGuN), a novel framework that combines human prior suboptimal knowledge with reinforcement learning.
Our framework consists of a fuzzy rule controller to represent human knowledge and a refine module to fine-tune suboptimal prior knowledge.
arXiv Detail & Related papers (2020-02-18T07:58:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.