Speak to your Parser: Interactive Text-to-SQL with Natural Language
Feedback
- URL: http://arxiv.org/abs/2005.02539v2
- Date: Mon, 1 Jun 2020 22:01:15 GMT
- Title: Speak to your Parser: Interactive Text-to-SQL with Natural Language
Feedback
- Authors: Ahmed Elgohary, Saghar Hosseini, Ahmed Hassan Awadallah
- Abstract summary: We study the task of semantic parse correction with natural language feedback.
In this paper, we investigate a more interactive scenario where humans can further interact with the system.
- Score: 39.45695779589969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the task of semantic parse correction with natural language
feedback. Given a natural language utterance, most semantic parsing systems
pose the problem as one-shot translation where the utterance is mapped to a
corresponding logical form. In this paper, we investigate a more interactive
scenario where humans can further interact with the system by providing
free-form natural language feedback to correct the system when it generates an
inaccurate interpretation of an initial utterance. We focus on natural language
to SQL systems and construct, SPLASH, a dataset of utterances, incorrect SQL
interpretations and the corresponding natural language feedback. We compare
various reference models for the correction task and show that incorporating
such a rich form of feedback can significantly improve the overall semantic
parsing accuracy while retaining the flexibility of natural language
interaction. While we estimate human correction accuracy at 81.5%, our best
model achieves only 25.1%, which leaves a large gap for improvement in future
research. SPLASH is publicly available at https://aka.ms/Splash_dataset.
Related papers
- Correcting Semantic Parses with Natural Language through Dynamic Schema
Encoding [0.06445605125467573]
We show that the accuracy of autoregressive decoders can be boosted by up to 26% with only one turn of correction with natural language.
A T5-base model is capable of correcting the errors of a T5-large model in a zero-shot, cross-parser setting.
arXiv Detail & Related papers (2023-05-31T16:01:57Z)
- Learning to Simulate Natural Language Feedback for Interactive Semantic
Parsing [30.609805601567178]
We propose a new task simulating NL feedback for interactive semantic parsing.
We accompany the task with a novel feedback evaluator.
Our feedback simulator can help achieve comparable error correction performance as trained using the costly, full set of human annotations.
arXiv Detail & Related papers (2023-05-14T16:20:09Z)
- Retrieval-based Disentangled Representation Learning with Natural
Language Supervision [61.75109410513864]
We present Vocabulary Disentangled Retrieval (VDR), a retrieval-based framework that harnesses natural language as proxies of the underlying data variation to drive disentangled representation learning.
Our approach employs a bi-encoder model to represent both data and natural language in a vocabulary space, enabling the model to distinguish, through the natural language counterpart, the intrinsic dimensions that capture characteristics within the data, thus achieving disentanglement.
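The vocabulary-space idea can be illustrated with a toy example (this is not the VDR implementation): embed both the data item and its natural language description as sparse bag-of-words vectors over one shared vocabulary, so a dot product exposes which vocabulary dimensions the two sides share. The vocabulary and token sequences below are invented for illustration.

```python
# Toy illustration of scoring data and text in a shared vocabulary space.
# VOCAB and the sample token sequences are hypothetical.
from collections import Counter

VOCAB = ["red", "car", "dog", "small"]  # assumed shared vocabulary

def vocab_embed(tokens, vocab=VOCAB):
    """Map a token sequence to a count vector over the shared vocabulary."""
    counts = Counter(t for t in tokens if t in vocab)
    return [counts[w] for w in vocab]

def similarity(u, v):
    """Dot product: nonzero only on dimensions both sides activate."""
    return sum(a * b for a, b in zip(u, v))

caption = vocab_embed("a small red car".split())
item = vocab_embed("red car photo".split())
other = vocab_embed("a dog running".split())
```

Here `similarity(caption, item)` exceeds `similarity(caption, other)` precisely because the first pair activates the same vocabulary dimensions ("red", "car"), which is the interpretable, per-dimension alignment the entry describes.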
arXiv Detail & Related papers (2022-12-15T10:20:42Z)
- Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z)
- Contextual Semantic Parsing for Multilingual Task-Oriented Dialogues [7.8378818005171125]
Given a large-scale dialogue data set in one language, we can automatically produce an effective semantic parser for other languages using machine translation.
We propose automatic translation of dialogue datasets with alignment to ensure faithful translation of slot values.
We show that the succinct representation reduces the compounding effect of translation errors.
arXiv Detail & Related papers (2021-11-04T01:08:14Z)
- Turing: an Accurate and Interpretable Multi-Hypothesis Cross-Domain
Natural Language Database Interface [11.782395912109324]
Natural language database interfaces (NLDBs) can democratize data-driven insights for non-technical users.
This work presents Turing, an NLDB system toward bridging this gap.
The cross-domain semantic validation method of Turing achieves 75.1% execution accuracy and 78.3% top-5 beam execution accuracy on the Spider validation set.
arXiv Detail & Related papers (2021-06-08T17:46:20Z)
- NL-EDIT: Correcting semantic parse errors through natural language
interaction [28.333860779302306]
We present NL-EDIT, a model for interpreting natural language feedback in the interaction context.
We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one turn of correction.
arXiv Detail & Related papers (2021-03-26T15:45:46Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for
Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model and character language models trained with varying amounts of target language data.
Our usage scenario is interactive correction starting from nearly zero training examples, improving the models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
- Photon: A Robust Cross-Domain Text-to-SQL System [189.1405317853752]
We present Photon, a robust, modular, cross-domain NLIDB that can flag natural language input to which a mapping cannot be immediately determined.
The proposed method effectively improves the robustness of text-to-SQL systems against untranslatable user input.
arXiv Detail & Related papers (2020-07-30T07:44:48Z)
- On the Importance of Word Order Information in Cross-lingual Sequence
Labeling [80.65425412067464]
Cross-lingual models that fit the word order of the source language may fail to handle target languages with different word orders.
We investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages.
arXiv Detail & Related papers (2020-01-30T03:35:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.