Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models
- URL: http://arxiv.org/abs/2310.14696v1
- Date: Mon, 23 Oct 2023 08:42:49 GMT
- Title: Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models
- Authors: Gangwoo Kim, Sungdong Kim, Byeongguk Jeon, Joonsuk Park, Jaewoo Kang
- Abstract summary: Tree of Clarifications (ToC) is a framework to generate a long-form answer to ambiguous questions.
ToC outperforms existing baselines on ASQA in a few-shot setup across the metrics.
- Score: 30.186503757127188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Questions in open-domain question answering are often ambiguous, allowing
multiple interpretations. One approach to handling them is to identify all
possible interpretations of the ambiguous question (AQ) and to generate a
long-form answer addressing them all, as suggested by Stelmakh et al. (2022).
While it provides a comprehensive response without bothering the user for
clarification, considering multiple dimensions of ambiguity and gathering
corresponding knowledge remains a challenge. To cope with the challenge, we
propose a novel framework, Tree of Clarifications (ToC): It recursively
constructs a tree of disambiguations for the AQ -- via few-shot prompting
leveraging external knowledge -- and uses it to generate a long-form answer.
ToC outperforms existing baselines on ASQA in a few-shot setup across the
metrics, while surpassing fully-supervised baselines trained on the whole
training set in terms of Disambig-F1 and Disambig-ROUGE. Code is available at
https://github.com/gankim/tree-of-clarifications.
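Below is a minimal, illustrative Python sketch of the recursive procedure the abstract outlines: retrieve external evidence for the ambiguous question, prompt an LLM few-shot to propose disambiguated question-answer pairs, recurse on each disambiguation, and finally condense the tree into one long-form answer. The helpers retrieve_passages and prompt_llm are hypothetical placeholders, not the authors' API; the actual implementation is in the repository linked above.

```python
# Hypothetical sketch only: `retrieve_passages` and `prompt_llm` stand in for
# an external-knowledge retriever and a few-shot-prompted LLM.
from dataclasses import dataclass, field


@dataclass
class Node:
    """One node of the clarification tree: a question plus its short answer."""
    question: str
    answer: str = ""
    children: list["Node"] = field(default_factory=list)


def retrieve_passages(question: str, k: int = 5) -> list[str]:
    """Placeholder: return top-k passages about `question` from an external corpus."""
    raise NotImplementedError


def prompt_llm(prompt: str) -> str:
    """Placeholder: call a few-shot-prompted LLM and return its text output."""
    raise NotImplementedError


def expand(node: Node, depth: int, max_depth: int = 2) -> None:
    """Recursively ask the LLM for disambiguations, grounded in retrieved passages."""
    if depth >= max_depth:
        return
    passages = retrieve_passages(node.question)
    raw = prompt_llm(
        "Passages:\n" + "\n".join(passages)
        + f"\n\nList disambiguated versions of: {node.question}\n"
        + "Format: one 'question ||| short answer' pair per line."
    )
    for line in raw.splitlines():
        if "|||" not in line:
            continue
        q, a = (part.strip() for part in line.split("|||", 1))
        child = Node(question=q, answer=a)
        node.children.append(child)
        expand(child, depth + 1, max_depth)


def answer_ambiguous_question(aq: str) -> str:
    """Build the clarification tree, then synthesize a single long-form answer."""
    root = Node(question=aq)
    expand(root, depth=0)
    pairs, stack = [], [root]
    while stack:  # collect every disambiguated Q/A pair in the tree
        node = stack.pop()
        if node is not root:
            pairs.append(f"Q: {node.question}\nA: {node.answer}")
        stack.extend(node.children)
    return prompt_llm(
        f"Ambiguous question: {aq}\n\nDisambiguations:\n" + "\n\n".join(pairs)
        + "\n\nWrite a long-form answer that covers all interpretations."
    )
```

The depth limit, prompt wording, and output format here are arbitrary choices for illustration only.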
Related papers
- Unsupervised multiple choices question answering via universal corpus [27.78825771434918]
We propose a novel framework designed to generate synthetic multiple choice question answering (MCQA) data.
We leverage both named entities (NE) and knowledge graphs to discover plausible distractors to form complete synthetic samples.
arXiv Detail & Related papers (2024-02-27T09:10:28Z)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach [28.849548176802262]
Long-form question answering (LFQA) poses a challenge as it involves generating detailed answers in the form of paragraphs.
We propose an LFQA model with iterative Planning, Retrieval, and Generation.
We find that our model outperforms the state-of-the-art models on various textual and factual metrics for the LFQA task.
arXiv Detail & Related papers (2023-11-15T21:22:27Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering [83.74210749046551]
We propose to leverage question decomposition for heterogeneous knowledge integration.
We propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT).
Experiments on the complex QA datasets KQA Pro and Musique show that our framework significantly outperforms SOTA methods.
arXiv Detail & Related papers (2023-05-24T11:45:59Z)
- Asking Clarification Questions to Handle Ambiguity in Open-Domain QA [25.80369529145732]
We propose to ask a clarification question, where the user's response will help identify the interpretation that best aligns with the user's intention.
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions.
We then define a pipeline of tasks and design appropriate evaluation metrics.
arXiv Detail & Related papers (2023-05-23T08:20:01Z)
- Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering [26.997542897342164]
We propose a novel framework, CONVSR (CONVQA using Structured Representations), for capturing and generating intermediate representations as conversational cues.
We test our model on the QuAC and CANARD datasets and show experimentally that our proposed framework achieves a better F1 score than the standard question rewriting model.
arXiv Detail & Related papers (2023-04-14T13:42:32Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top positions so that they are selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- ClarQ: A large-scale and diverse dataset for Clarification Question Generation [67.1162903046619]
We devise a novel bootstrapping framework that assists in the creation of a diverse, large-scale dataset of clarification questions based on post-comments extracted from Stack Exchange.
We quantitatively demonstrate the utility of the newly created dataset by applying it to the downstream task of question-answering.
We release this dataset in order to foster research into the field of clarification question generation with the larger goal of enhancing dialog and question answering systems.
arXiv Detail & Related papers (2020-06-10T17:56:50Z)