CONSISTENT: Open-Ended Question Generation From News Articles
- URL: http://arxiv.org/abs/2210.11536v1
- Date: Thu, 20 Oct 2022 19:10:07 GMT
- Title: CONSISTENT: Open-Ended Question Generation From News Articles
- Authors: Tuhin Chakrabarty, Justin Lewis, Smaranda Muresan
- Abstract summary: We propose CONSISTENT, a new end-to-end system for generating open-ended questions.
We demonstrate our model's strength over several baselines using both automatic and human-based evaluations.
We discuss potential downstream applications for news media organizations.
- Score: 38.41162895492449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on question generation has largely focused on factoid questions
such as who, what, where, when about basic facts. Generating open-ended why,
how, what, etc. questions that require long-form answers has proven more
difficult. To facilitate the generation of open-ended questions, we propose
CONSISTENT, a new end-to-end system for generating open-ended questions that
are answerable from and faithful to the input text. Using news articles as a
trustworthy foundation for experimentation, we demonstrate our model's strength
over several baselines using both automatic and human-based evaluations. We
contribute an evaluation dataset of expert-generated open-ended questions. We
discuss potential downstream applications for news media organizations.
Related papers
- Which questions should I answer? Salience Prediction of Inquisitive Questions [118.097974193544]
We show that highly salient questions are empirically more likely to be answered in the same article.
We further validate our findings by showing that answering salient questions is an indicator of summarization quality in news.
arXiv Detail & Related papers (2024-04-16T21:33:05Z) - FOLLOWUPQG: Towards Information-Seeking Follow-up Question Generation [38.78216651059955]
We introduce the task of real-world information-seeking follow-up question generation (FQG)
We construct FOLLOWUPQG, a dataset of over 3K real-world (initial question, answer, follow-up question) tuples collected from a Reddit forum providing layman-friendly explanations for open-ended questions.
In contrast to existing datasets, questions in FOLLOWUPQG use more diverse pragmatic strategies to seek information, and they also show higher-order cognitive skills.
arXiv Detail & Related papers (2023-09-10T11:58:29Z) - CREPE: Open-Domain Question Answering with False Presuppositions [92.20501870319765]
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z) - What should I Ask: A Knowledge-driven Approach for Follow-up Questions
Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z) - Generating Answer Candidates for Quizzes and Answer-Aware Question
Generators [16.44011627249311]
We propose a model that can generate a specified number of answer candidates for a given passage of text.
Our experiments show that our proposed answer candidate generation model outperforms several baselines.
arXiv Detail & Related papers (2021-08-29T19:33:51Z) - A Dataset of Information-Seeking Questions and Answers Anchored in
Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z) - Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z) - Stay Hungry, Stay Focused: Generating Informative and Specific Questions
in Information-Seeking Conversations [41.74162467619795]
We investigate the problem of generating informative questions in information-asymmetric conversations.
To generate pragmatic questions, we use reinforcement learning to optimize an informativeness metric.
We demonstrate that the resulting pragmatic questioner substantially improves the informativeness and specificity of questions generated over a baseline model.
arXiv Detail & Related papers (2020-04-30T00:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.