Follow-Up Questions Improve Documents Generated by Large Language Models
- URL: http://arxiv.org/abs/2407.12017v2
- Date: Thu, 15 Aug 2024 07:12:33 GMT
- Title: Follow-Up Questions Improve Documents Generated by Large Language Models
- Authors: Bernadette J Tix
- Abstract summary: This study investigates the impact of Large Language Models (LLMs) generating follow-up questions in response to user requests for short (1-page) text documents.
Users interacted with a novel web-based AI system designed to ask follow-up questions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates the impact of Large Language Models (LLMs) generating follow-up questions in response to user requests for short (1-page) text documents. Users interacted with a novel web-based AI system designed to ask follow-up questions. Users requested documents they would like the AI to produce. The AI then generated follow-up questions to clarify the user's needs or offer additional insights before generating the requested documents. After answering the questions, users were shown a document generated using both the initial request and the questions and answers, and a document generated using only the initial request. Users indicated which document they preferred and gave feedback about their experience with the question-answering process. The findings of this study show clear benefits to question-asking both in document preference and in the qualitative user experience. This study further shows that users found more value in questions which were thought-provoking, open-ended, or offered unique insights into the user's request as opposed to simple information-gathering questions.
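To make the interaction flow concrete, here is a minimal sketch of the two-condition setup described in the abstract, assuming a generic llm(prompt) completion function; the function and prompts are hypothetical stand-ins, not the study's actual system:

```python
# Minimal sketch of the study's two-condition flow: one document generated
# from the request alone, one from the request plus follow-up Q&A.
# `llm` is a hypothetical stand-in for any chat-completion API.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def generate_with_followups(request: str) -> tuple[str, str]:
    # Condition A: document from the initial request only.
    baseline = llm(f"Write a one-page document for this request:\n{request}")

    # Ask open-ended, clarifying follow-up questions before writing.
    questions = llm(
        "A user requested a one-page document:\n"
        f"{request}\n"
        "Ask a few open-ended follow-up questions that clarify the user's "
        "needs or offer additional insights."
    )
    answers = input(f"{questions}\nYour answers: ")

    # Condition B: document from the request plus the Q&A transcript.
    enriched = llm(
        f"Request:\n{request}\n\nFollow-up Q&A:\n{questions}\n{answers}\n\n"
        "Write a one-page document using all of the above."
    )
    return baseline, enriched  # shown side by side for user preference
```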
Related papers
- Auto FAQ Generation [0.0]
We propose a system for generating FAQ documents that extracts the salient questions and their corresponding answers from sizeable text documents.
We use existing text summarization, sentence ranking via the TextRank algorithm, and question-generation tools to create an initial set of questions and answers.
arXiv Detail & Related papers (2024-05-13T03:30:27Z)
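As a rough illustration of such a pipeline, the sketch below ranks sentences with a TextRank-style PageRank over a word-overlap similarity graph; the similarity measure and the generate_question stub are assumptions for illustration, not the paper's implementation:

```python
# TextRank-style extraction step: rank sentences by PageRank over a
# word-overlap similarity graph, then hand the top sentences to a
# question generator. Illustrative only; not the paper's code.
import itertools
import re

import networkx as nx

def rank_sentences(text: str, top_k: int = 5) -> list[str]:
    # Split into sentences and measure pairwise word overlap.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        overlap = len(words[i] & words[j])
        if overlap:
            graph.add_edge(i, j, weight=overlap)
    # TextRank: PageRank over the sentence-similarity graph.
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(top)]  # preserve document order

def generate_question(answer_sentence: str) -> str:
    # Placeholder for a question-generation tool (hypothetical stub).
    return f"What does the document say about: {answer_sentence[:60]}...?"
```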
- JDocQA: Japanese Document Question Answering Dataset for Generative Language Models [15.950718839723027]
We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset.
It comprises 5,504 documents in PDF format and 11,600 annotated question-and-answer instances in Japanese.
We incorporate multiple categories of questions and unanswerable questions from the document for realistic question-answering applications.
arXiv Detail & Related papers (2024-03-28T14:22:54Z)
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
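A minimal sketch of the generate-then-read idea, again assuming a generic llm(prompt) stand-in rather than the paper's actual code:

```python
# Generate-then-read (GenRead) sketch: step 1 prompts the LLM to generate
# contextual documents for the question; step 2 reads them to answer.
# `llm` is a hypothetical completion stand-in, not the paper's code.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def generate_then_read(question: str, num_docs: int = 3) -> str:
    # Step 1: generate contextual documents instead of retrieving them.
    docs = [
        llm(f"Generate a background document that helps answer:\n{question}")
        for _ in range(num_docs)
    ]
    # Step 2: read the generated documents to produce the final answer.
    context = "\n\n".join(docs)
    return llm(f"{context}\n\nQuestion: {question}\nAnswer:")
```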
- Asking the Right Questions in Low Resource Template Extraction [37.77304148934836]
We ask whether end users of TE systems can design these questions, and whether it is beneficial to involve an NLP practitioner in the process.
We propose a novel model to perform TE with prompts, and find it benefits from questions over other styles of prompts.
arXiv Detail & Related papers (2022-05-25T10:39:09Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-stage knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- Evaluating Mixed-initiative Conversational Search Systems via User Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z)
- Design Challenges for a Multi-Perspective Search Engine [44.48345943046946]
We study a new perspective-oriented document retrieval paradigm.
We discuss and assess the inherent natural language understanding challenges in order to achieve the goal.
We use the prototype system to conduct a user survey in order to assess the utility of our paradigm.
arXiv Detail & Related papers (2021-12-15T18:59:57Z)
- A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z)
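For context, the F1 referred to here is the token-overlap F1 commonly used to score extractive QA answers (answer-normalization details omitted); a minimal sketch:

```python
# Token-level F1 between a predicted and a gold answer span, the metric
# behind "27 F1 points" comparisons in extractive QA.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: partial overlap yields partial credit.
print(token_f1("the transformer architecture", "transformer architecture"))  # 0.8
```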
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- Conversations with Documents. An Exploration of Document-Centered Assistance [55.60379539074692]
Document-centered assistance, for example, to help an individual quickly review a document, has seen comparatively little progress.
We present a survey to understand the space of document-centered assistance and the capabilities people expect in this scenario.
We present a set of initial machine learned models that show that (a) we can accurately detect document-centered questions, and (b) we can build reasonably accurate models for answering such questions.
arXiv Detail & Related papers (2020-01-27T17:10:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.