On the Evolution of Syntactic Information Encoded by BERT's
Contextualized Representations
- URL: http://arxiv.org/abs/2101.11492v1
- Date: Wed, 27 Jan 2021 15:41:09 GMT
- Title: On the Evolution of Syntactic Information Encoded by BERT's
Contextualized Representations
- Authors: Laura Perez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner
- Abstract summary: In this paper, we analyze the evolution of the embedded syntax trees along the fine-tuning process of BERT for six different tasks.
Experimental results show that the encoded information is forgotten (PoS tagging), reinforced (dependency and constituency parsing) or preserved (semantics-related tasks) in different ways along the fine-tuning process depending on the task.
- Score: 11.558645364193486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adaptation of pretrained language models to solve supervised tasks has
become a baseline in NLP, and many recent works have focused on studying how
linguistic information is encoded in the pretrained sentence representations.
Among other information, it has been shown that entire syntax trees are
implicitly embedded in the geometry of such models. As these models are often
fine-tuned, it becomes increasingly important to understand how the encoded
knowledge evolves along the fine-tuning. In this paper, we analyze the
evolution of the embedded syntax trees along the fine-tuning process of BERT
for six different tasks, covering all levels of the linguistic structure.
Experimental results show that the encoded syntactic information is forgotten
(PoS tagging), reinforced (dependency and constituency parsing) or preserved
(semantics-related tasks) in different ways along the fine-tuning process
depending on the task.
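The claim that entire syntax trees are embedded in the geometry of the representations is typically operationalized with a distance-based structural probe in the style of Hewitt and Manning (2019): a low-rank linear map is trained so that squared distances between projected token vectors approximate parse-tree distances between the corresponding words. The sketch below illustrates that general recipe in PyTorch; the class and function names are illustrative assumptions, not the authors' experimental code.

```python
# Minimal sketch of a distance-based structural probe, assuming PyTorch.
# Names (StructuralProbe, probe_loss) are illustrative, not from the paper's code.
import torch
import torch.nn as nn


class StructuralProbe(nn.Module):
    """Learns a low-rank map B such that ||B(h_i - h_j)||^2 approximates
    the parse-tree distance between tokens i and j of a sentence."""

    def __init__(self, hidden_dim: int, probe_rank: int = 128):
        super().__init__()
        self.proj = nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (seq_len, hidden_dim) contextualized vectors for one sentence
        transformed = embeddings @ self.proj                 # (seq_len, rank)
        diffs = transformed.unsqueeze(1) - transformed.unsqueeze(0)
        return (diffs ** 2).sum(dim=-1)                      # predicted squared distances


def probe_loss(pred_dist: torch.Tensor, gold_tree_dist: torch.Tensor) -> torch.Tensor:
    """L1 loss between predicted squared distances and gold tree distances,
    normalized by squared sentence length, as in standard structural probing."""
    n = gold_tree_dist.size(0)
    return (pred_dist - gold_tree_dist).abs().sum() / (n * n)
```

Re-training such a probe on representations extracted from successive fine-tuning checkpoints, and tracking how well the predicted distances reconstruct gold parse trees, is the kind of measurement behind the forgotten/reinforced/preserved pattern reported above.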
Related papers
- UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding [31.272603877215733]
Cross-lingual representation learning transfers knowledge from resource-rich to resource-scarce data to improve the semantic understanding abilities of different languages.
We propose an Unsupervised Pseudo Semantic Data Augmentation (UniPSDA) mechanism for cross-lingual natural language understanding to enrich the training data without human interventions.
arXiv Detail & Related papers (2024-06-24T07:27:01Z)
- Injecting linguistic knowledge into BERT for Dialogue State Tracking [60.42231674887294]
This paper proposes a method that extracts linguistic knowledge via an unsupervised framework.
We then utilize this knowledge to augment BERT's performance and interpretability in Dialogue State Tracking (DST) tasks.
We benchmark this framework on various DST tasks and observe a notable improvement in accuracy.
arXiv Detail & Related papers (2023-11-27T08:38:42Z)
- Prompting Language Models for Linguistic Structure [73.11488464916668]
We present a structured prompting approach for linguistic structured prediction tasks.
We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking.
We find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels.
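(An illustrative sketch of this structured prompting setup appears after the related-papers list below.)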
arXiv Detail & Related papers (2022-11-15T01:13:39Z)
- Unified BERT for Few-shot Natural Language Understanding [7.352338840651369]
We propose UBERT, a unified bidirectional language understanding model based on the BERT framework.
UBERT encodes prior knowledge from various aspects, uniformly constructing learning representations across multiple NLU tasks.
Experiments show that UBERT achieves state-of-the-art performance on 7 NLU tasks across 14 datasets in few-shot and zero-shot settings.
arXiv Detail & Related papers (2022-06-24T06:10:53Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose ERICA, a novel contrastive learning framework applied in the pre-training phase to obtain a deeper understanding of entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
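As a companion to the "Prompting Language Models for Linguistic Structure" entry above, the following is a minimal, hypothetical sketch of how structured prompting for part-of-speech tagging can be set up; the prompt format and the `generate_tag` callback are illustrative assumptions rather than that paper's released code or exact prompt design.

```python
# Hypothetical sketch of structured prompting for PoS tagging: the model is
# queried once per word, with previously predicted word/tag pairs kept in the prompt.

def build_prompt(words, tags_so_far, demonstration):
    """Builds a prompt showing one tagged demonstration sentence, then the target
    sentence tagged up to the current word."""
    tagged_prefix = " ".join(f"{w}/{t}" for w, t in zip(words, tags_so_far))
    next_word = words[len(tags_so_far)]
    return (
        "Tag each word with its part of speech.\n"
        f"{demonstration}\n"
        f"{tagged_prefix} {next_word}/"
    )


def tag_sentence(words, generate_tag, demonstration="The/DET dog/NOUN barks/VERB ./PUNCT"):
    """Predicts one tag at a time; `generate_tag(prompt)` is assumed to return the
    language model's completion for the next tag."""
    tags = []
    for _ in words:
        tags.append(generate_tag(build_prompt(words, tags, demonstration)))
    return tags
```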