VOGUE: Answer Verbalization through Multi-Task Learning
- URL: http://arxiv.org/abs/2106.13316v2
- Date: Mon, 28 Jun 2021 16:12:34 GMT
- Title: VOGUE: Answer Verbalization through Multi-Task Learning
- Authors: Endri Kacupaj, Shyamnath Premnadh, Kuldeep Singh, Jens Lehmann, Maria
Maleshkova
- Abstract summary: We propose a multi-task-based answer verbalization framework: VOGUE (Verbalization thrOuGh mUlti-task lEarning).
Our framework generates results using both questions and queries concurrently as inputs.
We evaluate our framework on existing datasets for answer verbalization, and it outperforms all current baselines on both BLEU and METEOR scores.
- Score: 4.882444194224553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there have been significant developments in Question
Answering over Knowledge Graphs (KGQA). Despite all the notable advancements,
current KGQA systems focus only on answer generation techniques and not on
answer verbalization. However, in real-world scenarios (e.g., voice assistants
such as Alexa, Siri, etc.), users prefer a verbalized answer rather than a raw
generated response. This paper addresses the task of answer verbalization for
(complex) question answering over knowledge graphs. In this context, we propose
a multi-task-based answer verbalization framework: VOGUE (Verbalization thrOuGh
mUlti-task lEarning). The VOGUE framework attempts to generate a verbalized
answer using a hybrid approach through a multi-task learning paradigm. Our
framework generates results using both questions and queries concurrently as
inputs. VOGUE comprises four modules that are trained simultaneously
through multi-task learning. We evaluate our framework on existing datasets for
answer verbalization, and it outperforms all current baselines on both BLEU and
METEOR scores.
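
The abstract leaves the four modules unspecified, so the following is only a minimal sketch, assuming a shared Transformer encoder that consumes the question and the query concurrently and two illustrative task heads trained jointly; none of the names below come from the paper.

```python
# Minimal multi-task sketch (hypothetical, NOT VOGUE's actual architecture):
# a shared encoder reads the question and the query concurrently, and two
# illustrative task heads are optimized jointly via a weighted loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskVerbalizer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared
        self.similarity_head = nn.Linear(2 * d_model, 1)  # question/query relevance
        self.token_head = nn.Linear(d_model, vocab_size)  # verbalization tokens

    def forward(self, question_ids, query_ids):
        q = self.encoder(self.embed(question_ids)).mean(dim=1)  # pooled question
        s_seq = self.encoder(self.embed(query_ids))             # query states
        s = s_seq.mean(dim=1)                                   # pooled query
        sim = self.similarity_head(torch.cat([q, s], dim=-1))   # (batch, 1)
        logits = self.token_head(s_seq)                         # (batch, len, vocab)
        return sim, logits

def joint_loss(sim, sim_target, logits, token_target, alpha=0.5):
    # Weighted sum of per-task losses: both tasks are trained simultaneously.
    sim_loss = F.binary_cross_entropy_with_logits(sim.squeeze(-1), sim_target)
    gen_loss = F.cross_entropy(logits.transpose(1, 2), token_target)
    return alpha * sim_loss + (1 - alpha) * gen_loss
```

A real system would decode the verbalization autoregressively; the single linear token head here only stands in for the idea of sharing parameters across simultaneously trained tasks.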
Related papers
- Learning When to Retrieve, What to Rewrite, and How to Respond in Conversational QA [16.1357049130957]
We build on the single-turn SELF-RAG framework and propose SELF-multi-RAG for conversational settings.
SELF-multi-RAG demonstrates improved capabilities over single-turn variants with respect to retrieving relevant passages.
arXiv Detail & Related papers (2024-09-23T20:05:12Z)
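
The retrieve/rewrite/respond loop summarized in the entry above can be illustrated with a toy sketch; every function below is a stub invented here, not a SELF-multi-RAG component.

```python
# Toy retrieve/rewrite/respond loop for conversational QA. All components
# are stubs for illustration; the real system uses learned models.
def needs_retrieval(question: str) -> bool:
    # Stub policy: retrieve for fact-seeking questions.
    return question.lower().startswith(("who", "what", "when", "where"))

def rewrite(question: str, history: list[str]) -> str:
    # Stub rewrite: make the turn self-contained by appending context.
    return question if not history else f"{question} (context: {history[-1]})"

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    # Stub retriever: rank passages by word overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(corpus.values(),
                    key=lambda p: -len(words & set(p.lower().split())))
    return ranked[:k]

def respond(question: str, passages: list[str]) -> str:
    return f"Based on {passages[0]!r}: ..." if passages else "..."

history = ["Tell me about Ada Lovelace."]
corpus = {"d1": "Ada Lovelace wrote the first published algorithm.",
          "d2": "The Analytical Engine was designed by Charles Babbage."}
q = "Who designed the Analytical Engine?"
passages = retrieve(rewrite(q, history), corpus) if needs_retrieval(q) else []
print(respond(q, passages))
```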
- Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts [54.072432123447854]
Visual question answering (VQA) is the task of answering questions about an image.
Answering the question requires commonsense knowledge, world knowledge, and reasoning about ideas and concepts not present in the image.
We propose a framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc., to answer questions more accurately.
arXiv Detail & Related papers (2023-10-31T03:54:11Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model, which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
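
The iterative-prompting idea in the AmbigPrompt entry above can be sketched as a loop that conditions each prompt on the answers found so far; the `generate` stub below stands in for a real prompted language model.

```python
# Toy iterative answering loop: each round conditions the prompt on the
# answers collected so far and stops when nothing new is produced.
def generate(prompt: str) -> str:
    # Stub: a real system would call a language model here.
    pool = ["Python (language)", "Python (snake)", "Monty Python"]
    for candidate in pool:
        if candidate not in prompt:
            return candidate
    return ""  # nothing new to add

def answer_ambiguous(question: str, max_rounds: int = 5) -> list[str]:
    answers: list[str] = []
    for _ in range(max_rounds):
        prompt = (f"Q: {question}\nKnown answers: {answers}\n"
                  "Give a different valid answer:")
        new = generate(prompt)
        if not new or new in answers:
            break
        answers.append(new)
    return answers

print(answer_ambiguous("What is Python?"))
# ['Python (language)', 'Python (snake)', 'Monty Python']
```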
- Semantic Parsing for Conversational Question Answering over Knowledge Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses, and system answers correspond to their execution results.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z)
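
Since answers in such a dataset are the execution results of SPARQL parses, here is a minimal execution example, assuming the SPARQLWrapper library and the public DBpedia endpoint (illustrative choices only; SPICE itself may target a different knowledge graph).

```python
# Executing a SPARQL parse and reading off the answer entities.
# Endpoint and query are illustrative; requires `pip install sparqlwrapper`.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?capital WHERE {
        dbr:Germany dbo:capital ?capital .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# The system answer is the set of bindings returned by execution.
answers = [b["capital"]["value"] for b in results["results"]["bindings"]]
print(answers)  # e.g. ['http://dbpedia.org/resource/Berlin']
```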
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
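
To make the "multiple hops away" notion from the UniKGQA entry above concrete, a small self-contained breadth-first expansion over a toy triple store is shown below; it illustrates hop semantics only, not UniKGQA's unified retrieval-and-reasoning model.

```python
# Toy multi-hop KGQA primitive: collect entities exactly `hops` edges away
# from a topic entity by breadth-first expansion over a triple store.
from collections import defaultdict

triples = [("Einstein", "bornIn", "Ulm"),
           ("Ulm", "locatedIn", "Germany"),
           ("Germany", "capital", "Berlin")]

graph = defaultdict(list)
for head, _rel, tail in triples:
    graph[head].append(tail)

def entities_at_hops(topic: str, hops: int) -> set[str]:
    frontier = {topic}
    for _ in range(hops):
        frontier = {t for h in frontier for t in graph[h]}
    return frontier

print(entities_at_hops("Einstein", 2))  # {'Germany'}: a 2-hop answer
```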
- An Answer Verbalization Dataset for Conversational Question Answerings over Knowledge Graphs [9.979689965471428]
This paper contributes to the state-of-the-art by extending an existing ConvQA dataset with verbalized answers.
We perform experiments with five sequence-to-sequence models on generating answer responses while maintaining grammatical correctness.
arXiv Detail & Related papers (2022-08-13T21:21:28Z)
- Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering [43.07139534653485]
We present Answer-Me, a task-aware multi-task framework.
We pre-train a joint vision-language model, which is likewise multi-task.
Results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results.
arXiv Detail & Related papers (2022-05-02T14:53:13Z)
- End-to-end Spoken Conversational Question Answering: Task, Dataset and Model [92.18621726802726]
In spoken question answering, systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming to enable systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions over audio recordings, and to explore the feasibility of providing additional cues from different modalities during information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z)
- ParaQA: A Question Answering Dataset with Paraphrase Responses for Single-Turn Conversation [5.087932295628364]
ParaQA is a dataset with multiple paraphrased responses for single-turn conversation over knowledge graphs (KGs).
The dataset was created using a semi-automated framework that generates diverse paraphrases of the answers using techniques such as back-translation.
arXiv Detail & Related papers (2021-03-13T18:53:07Z)
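
The back-translation technique mentioned in the ParaQA entry above can be sketched as a round trip through a pivot language, assuming the Hugging Face transformers library and Helsinki-NLP MarianMT checkpoints (model choices are assumptions; ParaQA's exact pipeline may differ).

```python
# Back-translation paraphrasing sketch: English -> German -> English.
# Pivoting through another language yields a paraphrase of the answer.
# Requires `pip install transformers sentencepiece`; checkpoints are assumed.
from transformers import pipeline

to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def back_translate(answer: str) -> str:
    german = to_de(answer)[0]["translation_text"]
    return to_en(german)[0]["translation_text"]

print(back_translate("Berlin is the capital of Germany."))
# e.g. "Berlin is Germany's capital." (paraphrase varies by model)
```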