What Do Position Embeddings Learn? An Empirical Study of Pre-Trained
Language Model Positional Encoding
- URL: http://arxiv.org/abs/2010.04903v1
- Date: Sat, 10 Oct 2020 05:03:14 GMT
- Title: What Do Position Embeddings Learn? An Empirical Study of Pre-Trained
Language Model Positional Encoding
- Authors: Yu-An Wang, Yun-Nung Chen
- Abstract summary: This paper provides new insight into pre-trained position embeddings through feature-level analysis and empirical experiments on a range of iconic NLP tasks.
The authors intend their experimental results to guide future work in choosing a suitable positional encoding function for a specific task, given its application properties.
- Score: 42.011175069706816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, pre-trained Transformers have dominated the majority of NLP
benchmark tasks. Many variants of pre-trained Transformers keep emerging, and most
focus on designing different pre-training objectives or variants of self-attention.
Embedding position information in the self-attention mechanism is also an
indispensable component of Transformers, yet it is often chosen without much
deliberation. Therefore, this paper carries out an empirical study of the
position embeddings of mainstream pre-trained Transformers, focusing on two
questions: 1) Do position embeddings really learn the meaning of positions?
2) How do these differently learned position embeddings affect Transformers on
NLP tasks? The paper provides new insight into pre-trained position embeddings
through feature-level analysis and empirical experiments on a range of iconic
NLP tasks. The experimental results are intended to guide future work in
choosing a suitable positional encoding function for a specific task, given its
application properties.
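To make the contrast concrete, below is a minimal, illustrative sketch (not taken from the paper) of the two positional encoding families the abstract refers to: fixed sinusoidal encodings from the original Transformer and learned absolute position embeddings in the BERT/GPT-2 style. The function names, the NumPy implementation, and the hyperparameters (max_len=512, d_model=768) are assumptions chosen purely for illustration.

```python
# Illustrative sketch of two common positional encoding choices; all names
# and sizes here are assumptions, not the paper's code.
import numpy as np

def sinusoidal_positions(max_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encoding: PE[pos, 2i]   = sin(pos / 10000^(2i/d)),
                                  PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    positions = np.arange(max_len)[:, None]            # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def learned_positions(max_len: int, d_model: int) -> np.ndarray:
    """Learned absolute position embeddings: a trainable lookup table,
    stood in for here by a random initialization."""
    rng = np.random.default_rng(0)
    return rng.normal(scale=0.02, size=(max_len, d_model))

# Either table is simply added to the token embeddings before self-attention.
max_len, d_model = 512, 768
token_embeddings = np.zeros((max_len, d_model))        # placeholder tokens
inputs_fixed = token_embeddings + sinusoidal_positions(max_len, d_model)
inputs_learned = token_embeddings + learned_positions(max_len, d_model)
```

The paper's questions concern the learned variant: whether the trained lookup table actually encodes position order and distance, and how that affects downstream tasks compared with fixed alternatives.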