Exploring Answer Information Methods for Question Generation with
Transformers
- URL: http://arxiv.org/abs/2312.03483v1
- Date: Wed, 6 Dec 2023 13:26:16 GMT
- Title: Exploring Answer Information Methods for Question Generation with
Transformers
- Authors: Talha Chafekar, Aafiya Hussain, Grishma Sharma, Deepak Sharma
- Abstract summary: We use three different methods and their combinations for incorporating answer information and explore their effect on several automatic evaluation metrics.
We observe that answer prompting without any additional modes obtains the best ROUGE and METEOR scores.
We use a custom metric to calculate how many of the generated questions have the same answer as the one used to generate them.
- Score: 0.5904095466127044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a lot of work in question generation where different
methods to provide target answers as input have been employed. This experimentation
has been mostly carried out for RNN-based models. We use three different
methods and their combinations for incorporating answer information and explore
their effect on several automatic evaluation metrics. The methods that are used
are answer prompting, using a custom product method using answer embeddings and
encoder outputs, choosing sentences from the input paragraph that have answer
related information, and using a separate cross-attention block in
the decoder which attends to the answer. We observe that answer prompting
without any additional modes obtains the best ROUGE and METEOR scores.
Additionally, we use a custom metric to calculate how many of the generated
questions have the same answer as the one used to generate them.
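The answer-prompting method and the answer-match metric described above can be sketched in a few lines. This is an illustrative assumption, not the paper's implementation: the `<answer>`/`<context>` marker tokens and the exact-match comparison are hypothetical stand-ins for however the authors format inputs and compare answers.

```python
def build_prompted_input(answer: str, context: str) -> str:
    """Answer prompting: prepend the target answer to the passage so the
    encoder sees it alongside the context. Marker tokens are assumptions."""
    return f"<answer> {answer} <context> {context}"


def answer_match_rate(pairs) -> float:
    """Fraction of (target_answer, predicted_answer) pairs that match after
    normalization -- a simple stand-in for the paper's custom metric that
    checks whether generated questions keep the intended answer."""
    if not pairs:
        return 0.0
    hits = sum(
        1 for target, pred in pairs
        if target.strip().lower() == pred.strip().lower()
    )
    return hits / len(pairs)


# Example: one matching pair and one mismatched pair give a rate of 0.5.
prompted = build_prompted_input("Paris", "Paris is the capital of France.")
rate = answer_match_rate([("Paris", "paris"), ("Paris", "London")])
```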
Related papers
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z) - Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs
Answering [13.735277588793997]
We investigate how to enhance answer precision in frequently asked questions posed by distributed users using cloud-based Large Language Models (LLMs).
Our study focuses on a typical situation where users ask similar queries that involve identical mathematical reasoning steps and problem-solving procedures.
We propose to improve the distributed synonymous questions using Self-Consistency (SC) and Chain-of-Thought (CoT) techniques.
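The Self-Consistency idea mentioned above can be sketched as a majority vote over answers sampled from several reasoning chains. This is a minimal sketch of the generic SC technique, not the paper's federated protocol.

```python
from collections import Counter


def self_consistent_answer(sampled_answers):
    """Self-Consistency, minimally: sample several chain-of-thought
    completions, extract each final answer, and return the most common
    one after normalization."""
    counts = Counter(a.strip().lower() for a in sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer


# Example: three sampled chains, two of which agree on "42".
majority = self_consistent_answer(["42", "42 ", "41"])
```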
arXiv Detail & Related papers (2023-04-27T01:48:03Z) - Diverse Multi-Answer Retrieval with Determinantal Point Processes [11.925050407713597]
We propose a re-ranking based approach using determinantal point processes with BERT-based kernels.
Results demonstrate that our re-ranking technique outperforms state-of-the-art method on the AmbigQA dataset.
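The DPP re-ranking idea can be illustrated with greedy MAP inference: repeatedly pick the candidate that most increases the log-determinant of the selected kernel submatrix, which trades off quality against redundancy. This is a generic sketch of DPP selection under an assumed similarity kernel, not the paper's BERT-kernel construction.

```python
import numpy as np


def greedy_dpp(kernel: np.ndarray, k: int):
    """Greedy MAP inference for a DPP: at each step, add the item that
    maximizes log det of the selected submatrix, penalizing items similar
    to those already chosen."""
    n = kernel.shape[0]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected


# Items 0 and 1 are near-duplicates; item 2 is diverse, so selecting two
# items picks one of the duplicates plus the diverse item.
L = np.array([[1.0, 0.99, 0.0],
              [0.99, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
picked = greedy_dpp(L, 2)
```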
arXiv Detail & Related papers (2022-11-29T08:54:05Z) - A Semantic-based Method for Unsupervised Commonsense Question Answering [40.18557352036813]
Unsupervised commonsense question answering is appealing since it does not rely on any labeled task data.
We present a novel SEmantic-based Question Answering method (SEQA) for unsupervised commonsense question answering.
arXiv Detail & Related papers (2021-05-31T08:21:52Z) - Just Ask: Learning to Answer Questions from Millions of Narrated Videos [97.44376735445454]
We propose to avoid manual annotation and generate a large-scale training dataset for video question answering.
We leverage a question generation transformer trained on text data and use it to generate question-answer pairs from transcribed video narrations.
We show our method to significantly outperform the state of the art on MSRVTT-QA, MSVD-QA, ActivityNet-QA and How2QA.
arXiv Detail & Related papers (2020-12-01T12:59:20Z) - Diverse and Non-redundant Answer Set Extraction on Community QA based on
DPPs [18.013010857062643]
In community-based question answering platforms, it takes time for a user to get useful information from among many answers.
This paper proposes a new task of selecting a diverse and non-redundant answer set rather than ranking the answers.
arXiv Detail & Related papers (2020-11-18T07:33:03Z) - Meaningful Answer Generation of E-Commerce Question-Answering [77.89755281215079]
In e-commerce portals, generating answers for product-related questions has become a crucial task.
In this paper, we propose a novel generative neural model, called the Meaningful Product Answer Generator (MPAG)
MPAG alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration.
arXiv Detail & Related papers (2020-11-14T14:05:30Z) - Tag and Correct: Question aware Open Information Extraction with
Two-stage Decoding [73.24783466100686]
Question Open IE takes question and passage as inputs, outputting an answer which contains a subject, a predicate, and one or more arguments.
The semi-structured answer has two advantages over a span answer: it is more readable and more falsifiable.
Two methods are considered. One is an extractive method which extracts candidate answers from the passage with the Open IE model and ranks them by matching against the question.
The other is a generative method which uses a sequence-to-sequence model to generate answers directly.
arXiv Detail & Related papers (2020-09-16T00:58:13Z) - Crossing Variational Autoencoders for Answer Retrieval [50.17311961755684]
Question-answer alignment and question/answer semantics are two important signals for learning the representations.
We propose to cross variational auto-encoders by generating questions with aligned answers and generating answers with aligned questions.
arXiv Detail & Related papers (2020-05-06T01:59:13Z) - KPQA: A Metric for Generative Question Answering Using Keyphrase Weights [64.54593491919248]
KPQA-metric is a new metric for evaluating correctness of generative question answering systems.
Our new metric assigns different weights to each token via keyphrase prediction.
We show that our proposed metric has a significantly higher correlation with human judgments than existing metrics.
arXiv Detail & Related papers (2020-05-01T03:24:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.