Abstract: Adapting pretrained language models to supervised tasks has become standard practice in NLP, and many recent works have studied how linguistic information is encoded in the pretrained sentence representations. Among other findings, it has been shown that entire syntax trees are implicitly embedded in the geometry of such models. As these models are routinely fine-tuned, it becomes increasingly important to understand how this encoded knowledge evolves during fine-tuning. In this paper, we analyze the evolution of the embedded syntax trees during the fine-tuning of BERT on six different tasks, covering all levels of the linguistic structure.
Experimental results show that, depending on the task, the encoded syntactic information is forgotten (PoS tagging), reinforced (dependency and constituency parsing), or preserved (semantics-related tasks) along the fine-tuning process.