NL-EDIT: Correcting semantic parse errors through natural language
interaction
- URL: http://arxiv.org/abs/2103.14540v1
- Date: Fri, 26 Mar 2021 15:45:46 GMT
- Title: NL-EDIT: Correcting semantic parse errors through natural language
interaction
- Authors: Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney,
Gonzalo Ramos and Ahmed Hassan Awadallah
- Abstract summary: We present NL-EDIT, a model for interpreting natural language feedback in the interaction context.
We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one turn of correction.
- Score: 28.333860779302306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study semantic parsing in an interactive setting in which users correct
errors with natural language feedback. We present NL-EDIT, a model for
interpreting natural language feedback in the interaction context to generate a
sequence of edits that can be applied to the initial parse to correct its
errors. We show that NL-EDIT can boost the accuracy of existing text-to-SQL
parsers by up to 20% with only one turn of correction. We analyze the
limitations of the model and discuss directions for improvement and evaluation.
The code and datasets used in this paper are publicly available at
http://aka.ms/NLEdit.
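To make the edit-based correction loop concrete, below is a minimal sketch of applying a predicted edit sequence to an initial text-to-SQL parse. The Edit representation, the edit types, and the apply_edits helper are illustrative assumptions made for this summary, not the released NL-EDIT implementation (see http://aka.ms/NLEdit).

```python
# Minimal sketch of the "initial parse + feedback-derived edits" idea.
# The Edit type and apply_edits are hypothetical, not the NL-EDIT code.
from dataclasses import dataclass
from typing import List

@dataclass
class Edit:
    op: str      # e.g. "replace_column", "add_condition"
    target: str  # fragment of the initial parse to change
    value: str   # replacement or addition derived from the feedback

def apply_edits(initial_sql: str, edits: List[Edit]) -> str:
    """Apply a sequence of edits to the initial parse, one at a time."""
    sql = initial_sql
    for edit in edits:
        if edit.op == "replace_column":
            sql = sql.replace(edit.target, edit.value)
        elif edit.op == "add_condition":
            joiner = " AND " if " WHERE " in sql else " WHERE "
            sql = sql + joiner + edit.value
        # Further edit types (remove_table, change_aggregate, ...) would be
        # handled analogously.
    return sql

# One turn of correction: in NL-EDIT the model predicts the edits from the
# question, the initial parse, and the user's natural language feedback.
initial = "SELECT name FROM students"
edits = [Edit(op="add_condition", target="", value="age > 18")]
print(apply_edits(initial, edits))  # SELECT name FROM students WHERE age > 18
```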
Related papers
- UVL Sentinel: a tool for parsing and syntactic correction of UVL datasets [1.1821195547818244]
Feature models have become a de facto standard for representing variability in software product lines.
UVL (Universal Variability Language) is a language which expresses the features, dependencies, and constraints between them.
UVL Sentinel analyzes a dataset of feature models in UVL format, generates error analysis reports describing the errors found, and finally applies a syntactic processing step that implements the most common fixes.
arXiv Detail & Related papers (2024-03-27T11:56:08Z) - Correcting Semantic Parses with Natural Language through Dynamic Schema
Encoding [0.06445605125467573]
We show that the accuracy of autoregressive decoders can be boosted by up to 26% with only one turn of correction with natural language.
A T5-base model is capable of correcting the errors of a T5-large model in a zero-shot, cross-parser setting.
arXiv Detail & Related papers (2023-05-31T16:01:57Z) - Learning to Simulate Natural Language Feedback for Interactive Semantic
Parsing [30.609805601567178]
We propose a new task simulating NL feedback for interactive semantic parsing.
We accompany the task with a novel feedback evaluator.
Our feedback simulator helps achieve error-correction performance comparable to training on the costly, full set of human annotations.
arXiv Detail & Related papers (2023-05-14T16:20:09Z) - Towards Fine-Grained Information: Identifying the Type and Location of
Translation Errors [80.22825549235556]
Existing approaches cannot consider error position and type simultaneously.
We build an FG-TED model to predict both addition and omission errors.
Experiments show that our model can identify both error type and position concurrently, and gives state-of-the-art results.
arXiv Detail & Related papers (2023-02-17T16:20:33Z) - Language Anisotropic Cross-Lingual Model Editing [61.51863835749279]
Existing work studies only the monolingual scenario and thus lacks the cross-lingual transferability needed to perform edits across languages simultaneously.
We propose a framework that naturally adapts monolingual model-editing approaches to the cross-lingual scenario using a parallel corpus.
We empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed language anisotropic model editing.
arXiv Detail & Related papers (2022-05-25T11:38:12Z) - Understanding by Understanding Not: Modeling Negation in Language Models [81.21351681735973]
Negation is a core construction in natural language.
We propose to augment the language modeling objective with an unlikelihood objective based on negated generic sentences; a minimal sketch of such a term appears after this list.
We reduce the mean top-1 error rate to 4% on the negated LAMA dataset.
arXiv Detail & Related papers (2021-05-07T21:58:35Z) - Text Editing by Command [82.50904226312451]
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.
We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations.
arXiv Detail & Related papers (2020-10-24T08:00:30Z) - On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z) - Speak to your Parser: Interactive Text-to-SQL with Natural Language
Feedback [39.45695779589969]
We study the task of semantic parse correction with natural language feedback.
In this paper, we investigate a more interactive scenario where humans can further interact with the system.
arXiv Detail & Related papers (2020-05-05T23:58:09Z) - Towards Minimal Supervision BERT-based Grammar Error Correction [81.90356787324481]
We incorporate contextual information from a pre-trained language model to leverage annotations and benefit multilingual scenarios.
Results show the strong potential of Bidirectional Encoder Representations from Transformers (BERT) on the grammatical error correction task.
arXiv Detail & Related papers (2020-01-10T15:45:59Z)
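As noted in the "Understanding by Understanding Not" entry above, the unlikelihood idea can be sketched as a standard likelihood loss on ordinary sentences plus a term that pushes probability mass away from the continuations of negated generic sentences. The tensor shapes, the per-sentence negation mask, and the weighting factor alpha below are assumptions made for illustration; this is not the authors' training code.

```python
# Illustrative sketch of augmenting the LM objective with an unlikelihood
# term on negated sentences; shapes and masking are assumed, not taken
# from the paper's implementation.
import torch
import torch.nn.functional as F

def lm_loss_with_unlikelihood(logits, targets, is_negated, alpha=1.0):
    """logits: (batch, seq, vocab); targets: (batch, seq) token ids;
    is_negated: (batch,) bool mask marking negated generic sentences.
    Assumes the batch contains both ordinary and negated sentences."""
    log_probs = F.log_softmax(logits, dim=-1)
    tok_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (batch, seq)
    # Standard negative log-likelihood on ordinary sentences.
    nll = -(tok_logp[~is_negated]).mean()
    # Unlikelihood: penalize high probability on negated-sentence continuations.
    p = tok_logp[is_negated].exp().clamp(max=1.0 - 1e-6)
    unlikelihood = -torch.log1p(-p).mean()
    return nll + alpha * unlikelihood
```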