Follow-Up Questions Improve Documents Generated by Large Language Models
- URL: http://arxiv.org/abs/2407.12017v1
- Date: Thu, 27 Jun 2024 07:16:46 GMT
- Title: Follow-Up Questions Improve Documents Generated by Large Language Models
- Authors: Bernadette J. Tix
- Abstract summary: This study investigates the impact of Large Language Models generating follow-up questions in response to user requests for short text documents.
The findings of this study show clear benefits to question-asking both in document preference and in the qualitative user experience.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates the impact of Large Language Models generating follow-up questions in response to user requests for short text documents. Users provided prompts requesting documents they would like the AI to produce. The AI then generated questions to clarify the user's needs before generating the requested documents. Users answered the questions, indicated their preference between a document generated using both the initial prompt and the questions and answers and a document generated using only the initial prompt, and gave feedback about their experience with the question-answering process. The findings show clear benefits to question-asking, both in document preference and in the qualitative user experience.
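The two conditions the study compares (prompt-only generation versus ask-then-generate) can be sketched as below. This is a hypothetical illustration, not the authors' implementation: `complete` stands in for any real LLM API call and is stubbed here so the control flow runs end to end, and the `answer` callback stands in for the human user.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call, stubbed so the
    control flow below runs deterministically."""
    if "clarifying questions" in prompt:
        return "Who is the intended audience?\nWhat tone should the document have?"
    # A prompt that carries a Q&A transcript yields the "tailored" document.
    return "[tailored document]" if "Q:" in prompt else "[generic document]"

def generate_baseline(request: str) -> str:
    # Control condition: generate from the initial prompt alone.
    return complete(f"Write the requested document.\nRequest: {request}")

def generate_with_questions(request: str, answer) -> str:
    # Experimental condition: ask follow-up questions, collect the user's
    # answers, then generate from the prompt plus the Q&A transcript.
    questions = complete(f"Ask clarifying questions about this request:\n{request}")
    qa = "\n".join(f"Q: {q}\nA: {answer(q)}" for q in questions.splitlines())
    return complete(f"Write the requested document.\nRequest: {request}\n{qa}")
```

In the study, users then compared the two resulting documents side by side; the stub above only demonstrates how the Q&A transcript is folded back into the final generation prompt.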
Related papers
- Auto FAQ Generation [0.0]
We propose a system for generating FAQ documents that extracts the salient questions and their corresponding answers from sizeable text documents.
We use existing text summarization, sentence ranking via the TextRank algorithm, and question-generation tools to create an initial set of questions and answers.
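The sentence-ranking step can be illustrated with a minimal TextRank-style sketch: sentences form a graph whose edge weights are word overlap normalized by log sentence length, and scores are computed by iterating the PageRank-style update. This is a generic sketch of the technique, not the paper's implementation, and `textrank_rank` is a hypothetical helper name.

```python
import math
from itertools import combinations

def textrank_rank(sentences, d=0.85, iters=50):
    """Rank sentences by TextRank-style centrality; returns indices,
    most salient first."""
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        overlap = len(words[i] & words[j])
        denom = math.log(len(words[i]) + 1) + math.log(len(words[j]) + 1)
        if overlap and denom > 0:
            sim[i][j] = sim[j][i] = overlap / denom
    scores = [1.0] * n
    for _ in range(iters):
        # Standard damped update: each sentence inherits score from
        # neighbors in proportion to normalized edge weight.
        scores = [
            (1 - d) + d * sum(
                sim[j][i] / sum(sim[j]) * scores[j]
                for j in range(n) if sim[j][i] > 0
            )
            for i in range(n)
        ]
    return sorted(range(n), key=lambda i: -scores[i])
```

Sentences with no lexical overlap with the rest of the document fall to the bottom of the ranking, which is what makes this useful for picking salient question-worthy sentences.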
arXiv Detail & Related papers (2024-05-13T03:30:27Z)
- JDocQA: Japanese Document Question Answering Dataset for Generative Language Models [15.950718839723027]
We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset.
It comprises 5,504 documents in PDF format and 11,600 annotated question-and-answer instances in Japanese.
We incorporate multiple categories of questions and unanswerable questions from the document for realistic question-answering applications.
arXiv Detail & Related papers (2024-03-28T14:22:54Z)
- Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion [57.43781399856913]
This work adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis.
We characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained questions.
We develop the first-of-its-kind QUD parser that derives a dependency structure of questions over full documents.
arXiv Detail & Related papers (2022-10-12T03:53:12Z)
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
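The generate-then-read flow described above can be sketched as two model calls: one that generates background documents from the question alone, and one that reads them to answer. This is a schematic of the idea, not the authors' exact prompting; `llm` stands in for any model API, and the prompt strings are illustrative.

```python
def generate_then_read(question: str, llm, n_contexts: int = 3) -> str:
    """GenRead sketch: generate contextual documents for the question,
    then read those generated documents to produce the final answer."""
    # Step 1: generate (rather than retrieve) background documents.
    contexts = [llm(f"Generate a background document that answers: {question}")
                for _ in range(n_contexts)]
    # Step 2: read the generated documents to answer the question.
    reader_prompt = ("Answer the question using only these documents:\n"
                     + "\n---\n".join(contexts)
                     + f"\nQuestion: {question}")
    return llm(reader_prompt)
```

The key design choice is that no retriever or external corpus appears anywhere: the model's parametric knowledge supplies the context that a retriever would normally fetch.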
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
- V-Doc: Visual questions answers with Documents [1.6785823565413143]
V-Doc is a question-answering tool that works over document images and PDF files.
It supports generating and using both extractive and abstractive question-answer pairs.
arXiv Detail & Related papers (2022-05-27T02:38:09Z)
- Design Challenges for a Multi-Perspective Search Engine [44.48345943046946]
We study a new perspective-oriented document retrieval paradigm.
We discuss and assess the inherent natural language understanding challenges involved in achieving this goal.
We use the prototype system to conduct a user survey in order to assess the utility of our paradigm.
arXiv Detail & Related papers (2021-12-15T18:59:57Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- Knowledge-Aided Open-Domain Question Answering [58.712857964048446]
We propose a knowledge-aided open-domain QA (KAQA) method aimed at improving relevant document retrieval and answer reranking.
During document retrieval, a candidate document is scored by considering its relationship to the question and other documents.
During answer reranking, a candidate answer is reranked using not only its own context but also the clues from other documents.
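The reranking idea (a candidate's own context plus clues from other documents) can be illustrated with a hypothetical score-fusion sketch. The field names, the mean-support aggregation, and the `alpha` weight are illustrative assumptions, not the paper's actual scoring function.

```python
def rerank_answers(candidates, alpha=0.7):
    """Rerank candidate answers by fusing each candidate's own reader
    score with the mean support it receives from other documents."""
    def fused(c):
        support = c.get("support", [])  # clue scores from other documents
        cross = sum(support) / len(support) if support else 0.0
        return alpha * c["reader_score"] + (1 - alpha) * cross
    return sorted(candidates, key=fused, reverse=True)
```

With this fusion, an answer that scores lower in its own context can still win if it is consistently supported across the other retrieved documents.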
arXiv Detail & Related papers (2020-06-09T13:28:57Z)
- Conversations with Documents: An Exploration of Document-Centered Assistance [55.60379539074692]
Document-centered assistance, for example helping an individual quickly review a document, has seen less progress.
We present a survey to understand the space of document-centered assistance and the capabilities people expect in this scenario.
We present a set of initial machine learned models that show that (a) we can accurately detect document-centered questions, and (b) we can build reasonably accurate models for answering such questions.
arXiv Detail & Related papers (2020-01-27T17:10:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.