On the comparability of Pre-trained Language Models
- URL: http://arxiv.org/abs/2001.00781v1
- Date: Fri, 3 Jan 2020 10:53:35 GMT
- Title: On the comparability of Pre-trained Language Models
- Authors: Matthias Aßenmacher, Christian Heumann
- Abstract summary: Recent developments in unsupervised representation learning have successfully established the concept of transfer learning in NLP.
More elaborate architectures are making better use of contextual information.
Larger corpora are used as resources for pre-training large language models in a self-supervised fashion.
Advances in parallel computing as well as in cloud computing have made it possible to train models of growing capacity in the same amount of time as, or even less time than, previously established models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in unsupervised representation learning have successfully established the concept of transfer learning in NLP. Three main forces are driving the improvements in this area of research: More elaborate architectures are making better use of contextual information. Instead of simply plugging in static pre-trained representations, representations are learned from the surrounding context in end-to-end trainable models with more intelligently designed language modelling objectives. Along with this, larger corpora are used to pre-train large language models in a self-supervised fashion, which are afterwards fine-tuned on supervised tasks. Advances in parallel as well as cloud computing have made it possible to train models of growing capacity in the same amount of time as, or even less time than, previously established models. These three developments combine to produce new state-of-the-art (SOTA) results at an ever-increasing pace. It is not always obvious where these improvements originate, as the contributions of the three driving forces cannot be completely disentangled. We aim to provide a clear and concise overview of several large pre-trained language models that achieved SOTA results within the last two years, with respect to their use of new architectures and resources. We want to clarify for the reader where the differences between the models lie, and we furthermore attempt to gain some insight into the individual contributions of lexical/computational improvements as well as of architectural changes. We explicitly do not intend to quantify these contributions, but rather see our work as an overview that identifies potential starting points for benchmark comparisons. Furthermore, we tentatively point out potential avenues for improvement with regard to open-sourcing and reproducible research.
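
To make the pre-train/fine-tune paradigm described in the abstract concrete, the sketch below fine-tunes a publicly released, self-supervised pre-trained language model on a supervised downstream task. It is a minimal illustration, not taken from the paper: the choice of the Hugging Face `transformers` and `datasets` libraries, the `bert-base-uncased` checkpoint, the SST-2 task, and all hyperparameters are assumptions made purely for demonstration.

```python
# Minimal sketch of the pre-train/fine-tune paradigm: a language model that was
# pre-trained self-supervised on a large corpus is fine-tuned on a supervised task.
# Library, checkpoint, dataset and hyperparameters are illustrative assumptions only.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Load a publicly released pre-trained checkpoint (assumed here: BERT base).
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Supervised downstream task (assumed here: binary sentiment classification, SST-2).
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    # Convert raw sentences into the sub-word inputs the pre-trained model expects.
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="finetuned-sst2",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # small learning rate: adapt the pre-trained weights, do not re-train from scratch
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)

trainer.train()  # fine-tuning: the pre-trained weights are updated on the labelled data
```

Because architecture, pre-training corpus, and compute budget all enter through the chosen checkpoint, swapping `bert-base-uncased` for another model in such a setup changes all three factors at once, which is exactly the entanglement the paper discusses.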