lamBERT: Language and Action Learning Using Multimodal BERT
- URL: http://arxiv.org/abs/2004.07093v1
- Date: Wed, 15 Apr 2020 13:54:55 GMT
- Title: lamBERT: Language and Action Learning Using Multimodal BERT
- Authors: Kazuki Miyazawa, Tatsuya Aoki, Takato Horii, and Takayuki Nagai
- Abstract summary: This study proposes the language and action learning using multimodal BERT (lamBERT) model.
An experiment is conducted in a grid environment that requires language understanding for the agent to act properly.
The lamBERT model obtained higher rewards in multitask and transfer settings compared to other models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the bidirectional encoder representations from transformers (BERT)
model has attracted much attention in the field of natural language processing,
owing to its high performance in language understanding-related tasks. The BERT
model learns language representation that can be adapted to various tasks via
pre-training using a large corpus in an unsupervised manner. This study
proposes the language and action learning using multimodal BERT (lamBERT) model
that enables the learning of language and actions by 1) extending the BERT
model to multimodal representation and 2) integrating it with reinforcement
learning. To verify the proposed model, an experiment is conducted in a grid
environment that requires language understanding for the agent to act properly.
As a result, the lamBERT model obtained higher rewards in multitask settings
and transfer settings when compared to other models, such as the convolutional
neural network-based model and the lamBERT model without pre-training.
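
To make the described architecture concrete, the sketch below shows one plausible reading of the two ingredients named in the abstract: a BERT-style transformer encoder extended to accept both language tokens and grid-observation embeddings, with a pooled representation feeding actor and critic heads for reinforcement learning. This is a minimal illustration assuming PyTorch; all class names, dimensions, and the [CLS]-pooling plus actor-critic design are assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LamBERTSketch(nn.Module):
    """Illustrative multimodal transformer policy (NOT the paper's exact model):
    language tokens and grid-observation patches share one transformer encoder,
    and a pooled [CLS]-style embedding feeds actor/critic heads for RL."""

    def __init__(self, vocab_size=1000, obs_dim=64, d_model=128,
                 n_actions=4, n_layers=2, n_heads=4):
        super().__init__()
        # Learnable [CLS]-style token whose output embedding summarizes the input.
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # language modality
        self.obs_embed = nn.Linear(obs_dim, d_model)        # grid-observation modality
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.policy = nn.Linear(d_model, n_actions)         # actor head
        self.value = nn.Linear(d_model, 1)                  # critic head

    def forward(self, tokens, obs_patches):
        # tokens: (B, L) token ids; obs_patches: (B, P, obs_dim) flattened grid cells.
        lang = self.tok_embed(tokens)
        vis = self.obs_embed(obs_patches)
        # Concatenate [CLS] + language + observation embeddings into one sequence.
        x = torch.cat([self.cls.expand(tokens.size(0), -1, -1), lang, vis], dim=1)
        h = self.encoder(x)[:, 0]            # pooled [CLS] embedding
        return self.policy(h), self.value(h)

# Usage: a batch of 2 instructions (6 tokens each) and 3x3 grids (9 cells).
model = LamBERTSketch()
logits, value = model(torch.randint(0, 1000, (2, 6)), torch.randn(2, 9, 64))
```

In this reading, the abstract's "pre-training" step would correspond to training the shared encoder with a masked-prediction objective on both modalities before the actor/critic heads are optimized with a standard RL algorithm.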