Few-Shot Semantic Parsing with Language Models Trained On Code
- URL: http://arxiv.org/abs/2112.08696v1
- Date: Thu, 16 Dec 2021 08:34:06 GMT
- Title: Few-Shot Semantic Parsing with Language Models Trained On Code
- Authors: Richard Shin, Benjamin Van Durme
- Abstract summary: We find that Codex performs better at semantic parsing than equivalent GPT-3 models.
We find that, unlike GPT-3, Codex performs similarly when targeting meaning representations directly, perhaps because the meaning representations used in semantic parsing are structured similarly to code.
- Score: 52.23355024995237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models, prompted with in-context examples, can perform
semantic parsing with little training data. They do better when we formulate
the problem as paraphrasing into canonical utterances, which cast the
underlying meaning representations into a controlled natural language-like
representation. Intuitively, such models can more easily output canonical
utterances as they are closer to the natural language used for pre-training.
More recently, models also pre-trained on code, like OpenAI Codex, have risen
in prominence. Since accurately modeling code requires understanding of
executable semantics, such models may prove more adept at semantic parsing. In
this paper, we test this hypothesis and find that Codex performs better at
semantic parsing than equivalent GPT-3 models. We find that, unlike GPT-3,
Codex performs similarly when targeting meaning representations directly,
perhaps because the meaning representations used in semantic parsing are
structured similarly to code.
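To make the setup concrete, here is a minimal sketch of the kind of few-shot prompt the abstract describes: in-context pairs mapping natural utterances either to canonical utterances or directly to meaning representations, followed by a new query for the model to complete. The toy calendar domain, the example pairs, the prompt format, and the use of a small open model (gpt2) in place of Codex/GPT-3 are illustrative assumptions, not details taken from the paper.

```python
# Minimal, hypothetical sketch of the few-shot prompting setup described in
# the abstract. The toy calendar domain, the in-context pairs, the prompt
# format, and the use of a small open model (gpt2) in place of Codex/GPT-3
# are all illustrative assumptions, not details taken from the paper.
from transformers import pipeline

# Natural utterance -> canonical utterance (a controlled, English-like
# rendering of the underlying meaning representation).
CANONICAL_EXAMPLES = [
    ("what meetings do i have tomorrow",
     "list events whose date is tomorrow"),
    ("cancel my 3pm with Alice",
     "delete the event at 3 pm whose attendee is Alice"),
]

# The same utterances paired directly with meaning representations, the
# formulation the paper finds Codex handles about as well.
MR_EXAMPLES = [
    ("what meetings do i have tomorrow",
     "(listEvents (date tomorrow))"),
    ("cancel my 3pm with Alice",
     '(deleteEvent (and (time "15:00") (attendee "Alice")))'),
]


def build_prompt(examples, query):
    """Concatenate in-context examples, then ask for a parse of the query."""
    lines = []
    for utterance, target in examples:
        lines.append(f"utterance: {utterance}")
        lines.append(f"parse: {target}")
    lines.append(f"utterance: {query}")
    lines.append("parse:")
    return "\n".join(lines)


if __name__ == "__main__":
    generator = pipeline("text-generation", model="gpt2")
    for examples in (CANONICAL_EXAMPLES, MR_EXAMPLES):
        prompt = build_prompt(examples, "move my standup to friday")
        out = generator(prompt, max_new_tokens=25, do_sample=False)
        # Print only the model's continuation after the prompt.
        print(out[0]["generated_text"][len(prompt):])
```

Swapping CANONICAL_EXAMPLES for MR_EXAMPLES is the contrast the paper studies: paraphrasing into a controlled, English-like sublanguage versus predicting the meaning representation directly.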
Related papers
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
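As a rough illustration of the trajectory idea, and not the authors' exact procedure, the sketch below represents a text by continuations sampled from a causal language model and scores a pair of texts by how well each assigns probability to the other's continuations. The choice of gpt2, the number of samples, and the averaged log-probability scoring rule are all assumptions made for this example.

```python
# Rough, hypothetical sketch of the trajectory idea, not the authors' method:
# represent a text by continuations sampled from a causal LM and compare two
# texts by how well each assigns probability to the other's continuations.
# The model (gpt2), sample count, and averaged-log-probability scoring rule
# are assumptions made for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def sample_trajectories(prefix, k=8, max_new_tokens=20):
    """Sample k continuations ("trajectories") of the prefix, as token ids."""
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, do_sample=True, num_return_sequences=k,
                             max_new_tokens=max_new_tokens,
                             pad_token_id=tok.eos_token_id)
    # Keep only the generated part; sequences that stop early are padded with
    # EOS, which this sketch simply scores along with the rest.
    return [seq[ids.shape[1]:] for seq in out]


def continuation_logprob(prefix, cont_ids):
    """Average log-probability of the continuation tokens given the prefix."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids[0]
    full_ids = torch.cat([prefix_ids, cont_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(full_ids).logits[0]
    logprobs = torch.log_softmax(logits[:-1], dim=-1)  # position i predicts token i+1
    targets = full_ids[0, 1:]
    start = prefix_ids.shape[0] - 1                    # first continuation prediction
    picked = logprobs[start:].gather(1, targets[start:].unsqueeze(1))
    return float(picked.mean())


def trajectory_similarity(text_a, text_b, k=8):
    """Symmetric score: how well each text 'explains' the other's trajectories."""
    a_traj = sample_trajectories(text_a, k)
    b_traj = sample_trajectories(text_b, k)
    a_to_b = sum(continuation_logprob(text_b, c) for c in a_traj) / len(a_traj)
    b_to_a = sum(continuation_logprob(text_a, c) for c in b_traj) / len(b_traj)
    return 0.5 * (a_to_b + b_to_a)


if __name__ == "__main__":
    print(trajectory_similarity("A man is playing a guitar.",
                                "Someone is playing an instrument."))
```

The paper itself works with the distribution over all trajectories rather than a handful of samples, so this only conveys the flavor of the approach.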
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- Towards Understanding What Code Language Models Learned [10.989953856458996]
Pre-trained language models are effective in a variety of natural language tasks.
It has been argued that their capabilities fall short of fully learning meaning or understanding language.
We investigate their ability to capture semantics of code beyond superficial frequency and co-occurrence.
arXiv Detail & Related papers (2023-06-20T23:42:14Z)
- Zero and Few-shot Semantic Parsing with Ambiguous Inputs [45.285508941560295]
We introduce AmP, a framework, dataset, and challenge for translating ambiguous natural language to formal representations like logic and code.
Using AmP, we investigate how several few-shot text-to-code systems handle ambiguity, introducing three new metrics.
We find that large pre-trained models perform poorly at capturing the distribution of possible meanings without deliberate instruction.
arXiv Detail & Related papers (2023-06-01T15:46:36Z)
- On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex [48.588772371355816]
This paper presents the first empirical study on the adversarial robustness of a large prompt-based language model of code, Codex.
Our results demonstrate that the state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples.
arXiv Detail & Related papers (2023-01-30T13:21:00Z)
- On The Ingredients of an Effective Zero-shot Semantic Parser [95.01623036661468]
We analyze zero-shot learning by paraphrasing training examples of canonical utterances and programs from a grammar.
We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods.
Our model achieves strong performance on two semantic parsing benchmarks (Scholar, Geo) with zero labeled data.
arXiv Detail & Related papers (2021-10-15T21:41:16Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.