Learning state correspondence of reinforcement learning tasks for
knowledge transfer
- URL: http://arxiv.org/abs/2209.06604v1
- Date: Wed, 14 Sep 2022 12:42:59 GMT
- Title: Learning state correspondence of reinforcement learning tasks for
knowledge transfer
- Authors: Marko Ruman and Tatiana V. Guy
- Abstract summary: Generalizing and reusing knowledge are the fundamental requirements for creating a truly intelligent agent.
This work proposes a general method for one-to-one transfer learning based on a generative adversarial network model tailored to the RL task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning has shown an ability to achieve super-human
performance in solving complex reinforcement learning (RL) tasks from raw
pixels alone. However, it fails to reuse knowledge from previously learnt tasks
to solve new, unseen ones. Generalizing and reusing knowledge are the
fundamental requirements for creating a truly intelligent agent. This work
proposes a general method for one-to-one transfer learning based on a
generative adversarial network model tailored to the RL task.
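The core idea of the abstract, learning a correspondence that maps states of one RL task onto the state distribution of another via adversarial training, can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D state spaces, the linear generator, the logistic discriminator, and the hand-coded gradients below are all hypothetical simplifications chosen purely for illustration.

```python
import numpy as np

# Hypothetical sketch: learn a mapping G from source-task states
# (here ~ N(0, 1)) onto the distribution of target-task states
# (here ~ N(3, 1)) by training G adversarially against a discriminator D.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear generator G(s) = a*s + c; logistic discriminator D(x) = sigmoid(w*x + b).
a, c = 1.0, 0.0
w, b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    s = rng.normal(0.0, 1.0, size=64)   # source-task states
    t = rng.normal(3.0, 1.0, size=64)   # target-task states
    f = a * s + c                       # source states mapped into target space

    # Discriminator: gradient ascent on log D(t) + log(1 - D(G(s))).
    p_t = sigmoid(w * t + b)
    p_f = sigmoid(w * f + b)
    w += lr * np.mean((1 - p_t) * t - p_f * f)
    b += lr * np.mean((1 - p_t) - p_f)

    # Generator: gradient ascent on log D(G(s)) (non-saturating GAN loss).
    p_f = sigmoid(w * f + b)
    a += lr * np.mean((1 - p_f) * w * s)
    c += lr * np.mean((1 - p_f) * w)

# After training, mapped source states should sit near the target distribution.
mapped = a * rng.normal(0.0, 1.0, size=1000) + c
print(float(np.mean(mapped)))  # typically close to the target mean of 3
```

In the paper's setting the generator would be a deep network over raw-pixel states of two different tasks rather than a 1-D linear map, but the adversarial objective, a mapping trained until the discriminator cannot tell mapped source states from genuine target states, is the same in shape.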
Related papers
- Knowledge capture, adaptation and composition (KCAC): A framework for cross-task curriculum learning in robotic manipulation [6.683222869973898]
Reinforcement learning (RL) has demonstrated remarkable potential in robotic manipulation but faces challenges in sample inefficiency and lack of interpretability.
This paper proposes a Knowledge Capture, Adaptation, and Composition (KCAC) framework to integrate knowledge transfer into RL through cross-task curriculum learning.
As a result, our KCAC approach achieves a 40 percent reduction in training time while improving task success rates by 10 percent compared to traditional RL methods.
arXiv Detail & Related papers (2025-05-15T17:30:29Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles the task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z) - Factorizing Knowledge in Neural Networks [65.57381498391202]
We propose a novel knowledge-transfer task, Knowledge Factorization (KF).
KF aims to decompose a pretrained source network into several factor networks, each of which handles only a dedicated task and maintains the task-specific knowledge factorized from the source network.
We introduce an information-theoretic objective, InfoMax-Bottleneck (IMB), to carry out KF by optimizing the mutual information between the learned representations and the input.
arXiv Detail & Related papers (2022-07-04T09:56:49Z) - Learning Dynamics and Generalization in Reinforcement Learning [59.530058000689884]
We show theoretically that temporal difference learning encourages agents to fit non-smooth components of the value function early in training.
We show that neural networks trained using temporal difference algorithms on dense reward tasks exhibit weaker generalization between states than randomly initialized networks and networks trained with policy gradient methods.
arXiv Detail & Related papers (2022-06-05T08:49:16Z) - Multi-Source Transfer Learning for Deep Model-Based Reinforcement
Learning [0.6445605125467572]
A crucial challenge in reinforcement learning is to reduce the number of interactions with the environment that an agent requires to master a given task.
Transfer learning proposes to address this issue by re-using knowledge from previously learned tasks.
The goal of this paper is to address this issue with modular multi-source transfer learning techniques.
arXiv Detail & Related papers (2022-05-28T12:04:52Z) - Hierarchical Self-supervised Augmented Knowledge Distillation [1.9355744690301404]
We propose an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and self-supervised auxiliary task.
This joint distribution is demonstrated to be a richer form of knowledge that improves representation power without losing normal classification capability.
Our method significantly surpasses the previous SOTA SSKD with an average improvement of 2.56% on CIFAR-100 and an improvement of 0.77% on ImageNet.
arXiv Detail & Related papers (2021-07-29T02:57:21Z) - Split-and-Bridge: Adaptable Class Incremental Learning within a Single
Neural Network [0.20305676256390928]
Continual learning is a major problem in the deep learning community.
In this paper, we propose a novel continual learning method, called Split-and-Bridge.
arXiv Detail & Related papers (2021-07-03T05:51:53Z) - Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z) - A Combinatorial Perspective on Transfer Learning [27.7848044115664]
We study how the learning of modular solutions can allow for effective generalization to both unseen and potentially differently distributed data.
Our main postulate is that the combination of task segmentation, modular learning and memory-based ensembling can give rise to generalization on an exponentially growing number of unseen tasks.
arXiv Detail & Related papers (2020-10-23T09:53:31Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves the standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.