Awakening Latent Grounding from Pretrained Language Models for Semantic
Parsing
- URL: http://arxiv.org/abs/2109.10540v1
- Date: Wed, 22 Sep 2021 06:46:29 GMT
- Title: Awakening Latent Grounding from Pretrained Language Models for Semantic
Parsing
- Authors: Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang
Lou
- Abstract summary: We show that pretrained language models (PLMs) can discover which token should be grounded to which concept.
Our approach can awaken latent grounding that is understandable to human experts, even though it is not exposed to such labels during training.
- Score: 28.9192914266566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, pretrained language models (PLMs) have achieved success on
several downstream tasks, demonstrating their power in modeling language. To better
understand and leverage what PLMs have learned, several techniques have emerged
to probe the syntactic structures entailed by PLMs. However, few efforts have
been made to explore the grounding capabilities of PLMs, which are equally essential.
In this paper, we highlight the ability of PLMs to discover which token should
be grounded to which concept when combined with our proposed
erasing-then-awakening approach. Empirical studies on four datasets demonstrate
that our approach can awaken latent grounding that is understandable to human
experts, even though it is never exposed to such labels during training. More
importantly, our approach shows great potential to benefit downstream semantic
parsing models. Taking text-to-SQL as a case study, we successfully couple our
approach with two off-the-shelf parsers, obtaining an absolute improvement of
up to 9.8%.
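To make the erasing idea behind erasing-then-awakening concrete, below is a minimal sketch written against the HuggingFace transformers API. It is not the authors' released implementation: the model choice (bert-base-uncased), the schema linearization with [SEP], and the cosine-shift scoring rule are illustrative assumptions. It only captures the intuition from the abstract: erase one schema concept at a time and treat the question tokens whose representations react most strongly as grounded to that concept.

```python
# Hedged sketch of concept-erasing for latent grounding (illustrative only;
# not the paper's released code). Assumes: bert-base-uncased, a simple
# "[SEP]"-joined schema linearization, and cosine shift as the grounding score.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def encode_question(question, concepts):
    """Encode the question followed by a linearized schema; return the
    contextual vectors of the question tokens only."""
    text = question + " [SEP] " + " [SEP] ".join(concepts)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]      # (seq_len, dim)
    q_tokens = tokenizer.tokenize(question)
    return hidden[1:1 + len(q_tokens)], q_tokens           # skip [CLS]

def grounding_scores(question, concepts):
    """Score each (question token, concept) pair: erase one concept at a
    time and measure how much each question token's vector shifts."""
    full, q_tokens = encode_question(question, concepts)
    scores = {}
    for i, concept in enumerate(concepts):
        erased = concepts[:i] + concepts[i + 1:]           # erase this concept
        reduced, _ = encode_question(question, erased)
        shift = 1 - torch.cosine_similarity(full, reduced, dim=-1)
        for tok, s in zip(q_tokens, shift.tolist()):
            scores[(tok, concept)] = s
    return scores

if __name__ == "__main__":
    demo = grounding_scores(
        "show the names of singers older than 40",
        ["singer.name", "singer.age", "concert.year"],      # hypothetical schema
    )
    for (tok, concept), s in sorted(demo.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{tok:>10} -> {concept:<14} shift={s:.3f}")
```

In the paper, grounding signals of this kind are coupled with off-the-shelf text-to-SQL parsers (for example, as schema-linking features); the 9.8% figure above refers to that coupled setting, not to this sketch.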
Related papers
- Simple Techniques for Enhancing Sentence Embeddings in Generative Language Models [3.0566617373924325]
Sentence embedding is a fundamental task within the realm of Natural Language Processing, finding extensive application in search engines, expert systems, and question-and-answer platforms.
With the continuous evolution of large language models such as LLaMA and Mistral, research on sentence embedding has recently achieved notable breakthroughs.
We propose two innovative prompt engineering techniques capable of further enhancing the expressive power of PLMs' raw embeddings.
arXiv Detail & Related papers (2024-04-05T07:07:15Z)
- A Survey on Prompting Techniques in LLMs [0.0]
Autoregressive Large Language Models have transformed the landscape of Natural Language Processing.
We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy.
We identify some open problems in the realm of prompting in autoregressive LLMs which could serve as a direction for future research.
arXiv Detail & Related papers (2023-11-28T17:56:34Z)
- Aligning Large Language Models with Human: A Survey [53.6014921995006]
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks.
Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect information.
This survey presents a comprehensive overview of the alignment technologies developed to address these limitations.
arXiv Detail & Related papers (2023-07-24T17:44:58Z)
- How Does Pretraining Improve Discourse-Aware Translation? [41.20896077662125]
We introduce a probing task to interpret the ability of pretrained language models to capture discourse relation knowledge.
We validate three state-of-the-art PLMs across encoder-, decoder-, and encoder-decoder-based models.
Our findings are instructive to understand how and when discourse knowledge in PLMs should work for downstream tasks.
arXiv Detail & Related papers (2023-05-31T13:36:51Z)
- Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from an external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z)
- ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models [78.08792285698853]
We present a large-scale empirical study on general language ability evaluation of pretrained language models (ElitePLM).
Our empirical results demonstrate that: (1) PLMs with varying training objectives and strategies are good at different ability tests; (2) fine-tuning PLMs in downstream tasks is usually sensitive to the data size and distribution; and (3) PLMs have excellent transferability between similar tasks.
arXiv Detail & Related papers (2022-05-03T14:18:10Z)
- Knowledgeable Salient Span Mask for Enhancing Language Models as Knowledge Base [51.55027623439027]
We develop two solutions to help the model learn more knowledge from unstructured text in a fully self-supervised manner.
To the best of our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training.
arXiv Detail & Related papers (2022-04-17T12:33:34Z)
- Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey [8.427521246916463]
Pretrained Language Models (PLMs) have established a new paradigm by learning informative representations from large-scale text corpora.
This new paradigm has revolutionized the entire field of natural language processing and set new state-of-the-art performance for a wide variety of NLP tasks.
However, such models capture knowledge only implicitly; to address this, integrating knowledge into PLMs has recently become a very active research area, and a variety of approaches have been developed.
arXiv Detail & Related papers (2021-10-16T03:27:56Z)
- Multilingual Chart-based Constituency Parse Extraction from Pre-trained Language Models [21.2879567125422]
We propose a novel method for extracting complete (binary) parses from pre-trained language models.
By applying our method to multilingual PLMs, it becomes possible to induce non-trivial parses for sentences in nine languages.
arXiv Detail & Related papers (2020-04-08T05:42:26Z)