IQA: Interactive Query Construction in Semantic Question Answering Systems
- URL: http://arxiv.org/abs/2006.11534v3
- Date: Thu, 25 Jun 2020 05:17:03 GMT
- Title: IQA: Interactive Query Construction in Semantic Question Answering Systems
- Authors: Hamid Zafar, Mohnish Dubey, Jens Lehmann, Elena Demidova
- Abstract summary: We introduce IQA - an interaction scheme for SQA pipelines.
We show that even a small number of user interactions can lead to significant improvements in the performance of SQA systems.
- Score: 8.961129460639999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic Question Answering (SQA) systems automatically interpret user
questions expressed in a natural language in terms of semantic queries. This
process involves uncertainty, such that the resulting queries do not always
accurately match the user intent, especially for more complex and less common
questions. In this article, we aim to empower users in guiding SQA systems
towards the intended semantic queries through interaction. We introduce IQA -
an interaction scheme for SQA pipelines. This scheme facilitates seamless
integration of user feedback in the question answering process and relies on
Option Gain - a novel metric that enables efficient and intuitive user
interaction. Our evaluation shows that using the proposed scheme, even a small
number of user interactions can lead to significant improvements in the
performance of SQA systems.
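The interaction loop described in the abstract can be illustrated with a minimal sketch. Note that the abstract does not define the Option Gain formula, so the `option_gain` scoring below is a hypothetical stand-in (an uncertainty-splitting heuristic), and the candidate/option data layout is invented for illustration.

```python
def option_gain(option, candidates):
    # Hypothetical stand-in for the paper's Option Gain metric:
    # score an option by how evenly it splits the probability mass
    # of the remaining candidate queries, so that a user's yes/no
    # answer prunes as much uncertainty as possible.
    mass = sum(c["score"] for c in candidates)
    covered = sum(c["score"] for c in candidates if option in c["parts"])
    p = covered / mass
    return 1.0 - abs(2.0 * p - 1.0)  # maximal when p == 0.5

def interact(candidates, answer_fn):
    # Iteratively ask the user about the highest-gain option and
    # keep only the candidate queries consistent with the answer.
    while len(candidates) > 1:
        options = {o for c in candidates for o in c["parts"]}
        # Drop options shared by all candidates: they cannot discriminate.
        options = {o for o in options
                   if any(o not in c["parts"] for c in candidates)}
        if not options:
            break
        best = max(options, key=lambda o: option_gain(o, candidates))
        keep = answer_fn(best)  # True if the option matches user intent
        candidates = [c for c in candidates
                      if (best in c["parts"]) == keep]
    return max(candidates, key=lambda c: c["score"])
```

For example, given three candidate queries for "What is the capital of Germany?" built from hypothetical KB identifiers, a single confirmation of `dbr:Germany` or rejection of `dbr:Georgia` already isolates the intended query.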
Related papers
- Does This Summary Answer My Question? Modeling Query-Focused Summary Readers with Rational Speech Acts [19.010077275314668]
We adapt the Rational Speech Act (RSA) framework, a model of human communication, to explicitly model a reader's understanding of a generated summary.
We introduce the answer reconstruction objective which approximates a reader's understanding of a summary by their ability to use it to reconstruct the answer to their initial query.
arXiv Detail & Related papers (2024-11-10T16:48:21Z)
- SQuArE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- Evaluation of Question Answering Systems: Complexity of judging a natural language [3.4771957347698583]
Question answering (QA) systems are among the most important and rapidly developing research topics in natural language processing (NLP)
This survey attempts to provide a systematic overview of the general framework of QA, QA paradigms, benchmark datasets, and assessment techniques for a quantitative evaluation of QA systems.
arXiv Detail & Related papers (2022-09-10T12:29:04Z)
- ProQA: Structural Prompt-based Pre-training for Unified Question Answering [84.59636806421204]
ProQA is a unified QA paradigm that solves various tasks through a single model.
It concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task.
ProQA consistently boosts performance across full-data fine-tuning, few-shot learning, and zero-shot testing scenarios.
arXiv Detail & Related papers (2022-05-09T04:59:26Z)
- An Initial Investigation of Non-Native Spoken Question-Answering [36.89541375786233]
We show that a simple text-based ELECTRA MC model trained on SQuAD2.0 transfers well for spoken question answering tests.
One significant challenge is the lack of appropriately annotated speech corpora to train systems for this task.
Mismatches must be considered between text documents and spoken responses, and between non-native spoken grammar and written grammar.
arXiv Detail & Related papers (2021-07-09T21:59:16Z)
- Distantly Supervised Transformers For E-Commerce Product QA [5.460297795256275]
We propose a practical instant question answering (QA) system for product pages of e-commerce services.
For each user query, relevant community question answer (CQA) pairs are retrieved.
Our proposed transformer-based model learns a robust relevance function by jointly learning unified syntactic and semantic representations.
arXiv Detail & Related papers (2021-04-07T06:37:16Z)
- NoiseQA: Challenge Set Evaluation for User-Centric Question Answering [68.67783808426292]
We show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error.
We conclude that there is substantial room for progress before QA systems can be effectively deployed.
arXiv Detail & Related papers (2021-02-16T18:35:29Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering(NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
- Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback.
Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering task (SCQA).
SCQA aims to enable QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z)
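Among the related papers, the feedback-weighted learning idea (improving a deployed system from binary user feedback via importance sampling) can be sketched as an importance-weighted SGD update. The function name, the logged-record layout, and the logistic model below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedback_weighted_sgd(w, logged, lr=0.1):
    # One pass of importance-weighted SGD over logged interactions.
    # Each record: (features, feedback in {0, 1}, propensity), where
    # propensity is the probability the deployed model assigned to the
    # answer it actually showed the user. Weighting by 1/propensity
    # corrects for the log being collected under the old model's
    # action distribution (an importance-sampling estimate).
    for x, feedback, propensity in logged:
        weight = 1.0 / max(propensity, 1e-3)  # clip to bound variance
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        grad_scale = weight * (p - feedback)  # weighted logistic gradient
        w = [wi - lr * grad_scale * xi for wi, xi in zip(w, x)]
    return w
```

Propensity clipping is a common practical safeguard here: a record shown with very low probability would otherwise receive an enormous weight and dominate the update.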
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.