Abstract: Natural language processing (NLP) tasks (e.g. question-answering in English)
benefit from knowledge of other tasks (e.g. named entity recognition in
English) and knowledge of other languages (e.g. question-answering in Spanish).
The representations that capture such shared knowledge are typically learned in
isolation, either across tasks or across languages. In this work, we propose a
meta-learning approach to learn interactions across both tasks and languages. We also investigate
the role of different sampling strategies used during meta-learning. We present
experiments on five different tasks and six different languages from the XTREME
multilingual benchmark dataset. Our meta-learned model clearly outperforms
competitive baselines, including multi-task models. We also present zero-shot evaluations on unseen target
languages to demonstrate the utility of our proposed model.
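To make the training setup the abstract alludes to concrete, below is a minimal, self-contained sketch of meta-learning over (task, language) episodes. It is an illustration under stated assumptions, not the paper's actual method: the Reptile-style first-order meta-update and uniform episode sampling are stand-ins chosen for brevity (the paper studies several sampling strategies), and all names here (sample_episode, the toy linear model, the task and language lists, the random data) are hypothetical placeholders.

```python
import random
import torch
from torch import nn

# Hypothetical stand-ins for the XTREME setup described in the abstract;
# the batch data below is random, purely for illustration.
TASKS = ["qa", "ner", "pos", "paraphrase", "nli"]   # five tasks
LANGUAGES = ["en", "es", "de", "hi", "zh", "ar"]    # six languages

def sample_episode():
    """Uniformly sample a (task, language) pair plus a toy batch.

    Uniform sampling is only one possible strategy; the paper
    investigates several alternatives not reproduced here.
    """
    task, lang = random.choice(TASKS), random.choice(LANGUAGES)
    x, y = torch.randn(16, 8), torch.randn(16, 1)
    return task, lang, x, y

model = nn.Linear(8, 1)          # stand-in for a shared multilingual encoder
loss_fn = nn.MSELoss()
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for meta_step in range(100):
    task, lang, x, y = sample_episode()

    # Inner loop: adapt a copy of the meta-parameters on the sampled episode.
    fast = nn.Linear(8, 1)
    fast.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss_fn(fast(x), y).backward()
        opt.step()

    # Outer (Reptile-style first-order) update: move the meta-parameters
    # toward the episode-adapted parameters.
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            p.add_(meta_lr * (q - p))
```

The point of the sketch is the episode structure: each meta-step draws a (task, language) pair, adapts briefly on it, and folds the adaptation back into the shared parameters, which is how interactions across both tasks and languages can be learned jointly rather than in isolation.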