Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering
- URL: http://arxiv.org/abs/2408.15037v1
- Date: Tue, 27 Aug 2024 13:07:07 GMT
- Title: Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering
- Authors: Haowei Du, Huishuai Zhang, Dongyan Zhao
- Abstract summary: We propose a novel evidence-enhanced triplet generation framework, EATQA, to predict all combinations of the (Question, Evidence, Answer) triplet.
We bridge the distribution gap to distill knowledge from the evidence at the inference stage.
Our framework ensures that the model learns the logical relations between query, evidence, and answer, which simultaneously improves evidence generation and query answering.
- Score: 41.990482015732574
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To address hallucination in generative question answering (GQA), where the answer cannot be derived from the document, we propose a novel evidence-enhanced triplet generation framework, EATQA, which encourages the model to predict all combinations of the (Question, Evidence, Answer) triplet by flipping the source pair and the target label to understand their logical relationships, i.e., predicting the Answer (A), Question (Q), and Evidence (E) given the QE, EA, and QA pairs, respectively. Furthermore, we bridge the distribution gap to distill knowledge from the evidence at the inference stage. Our framework ensures that the model learns the logical relations between query, evidence, and answer, which simultaneously improves evidence generation and query answering. In this paper, we apply EATQA to LLaMA, and it outperforms other LLM-based methods and hallucination mitigation approaches on two challenging GQA benchmarks. Further analysis shows that our method not only keeps prior knowledge within the LLM, but also mitigates hallucination and generates faithful answers.
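As a rough illustration of the triplet objective described in the abstract, the sketch below builds the three flipped training directions (QE→A, EA→Q, QA→E) from one annotated example. The prompt templates are illustrative assumptions, not the paper's actual input formats, and the distillation term that bridges the training/inference distribution gap is omitted.

```python
# Minimal sketch of EATQA-style triplet generation: each (Q, E, A) example
# yields three seq2seq training pairs by flipping the source pair and the
# target label. The prompt templates are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Triplet:
    question: str
    evidence: str
    answer: str

def build_training_pairs(t: Triplet) -> list[tuple[str, str]]:
    return [
        # QE -> A: answer the question given the evidence
        (f"question: {t.question} evidence: {t.evidence} predict the answer:",
         t.answer),
        # EA -> Q: recover the question from evidence and answer
        (f"evidence: {t.evidence} answer: {t.answer} predict the question:",
         t.question),
        # QA -> E: generate the supporting evidence for the QA pair
        (f"question: {t.question} answer: {t.answer} predict the evidence:",
         t.evidence),
    ]

pairs = build_training_pairs(Triplet(
    question="Who wrote Hamlet?",
    evidence="Hamlet is a tragedy written by William Shakespeare.",
    answer="William Shakespeare",
))
for source, target in pairs:
    print(source, "=>", target)
```

All three pairs would be trained with a standard generation loss on the same LM; the abstract's distillation step would then align the evidence-conditioned answer distribution with the evidence-free one used at inference.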
Related papers
- Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering [18.48602809114524]
Knowledge Graph Question Answering (KGQA) methods seek to answer natural language questions using the relational information stored in Knowledge Graphs (KGs).
With the recent advancements of Large Language Models (LLMs) and their remarkable reasoning abilities, there is a growing trend to leverage them for KGQA.
We propose Right for Right Reasons (R3), a commonsense KGQA methodology that allows for a verifiable reasoning procedure.
arXiv Detail & Related papers (2024-03-03T04:22:13Z)
- Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models [16.432208223793666]
Chain-of-Thought (CoT) prompting, along with sub-question generation and answering, has enhanced the multi-step reasoning capabilities of large language models.
We propose GE-Reasoning, a method that directs large language models to generate proper sub-questions and corresponding answers.
Our approach outperforms previous CoT prompting methods and their variants on multi-hop question answering benchmark datasets.
arXiv Detail & Related papers (2023-11-16T10:36:08Z)
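As a loose sketch of the sub-question generation-and-answering pattern summarized in the entry above (a linear simplification of the graph-guided elicitation the title suggests): `llm` is a hypothetical completion callable and the prompt wording is assumed, not taken from the paper.

```python
# Hypothetical sub-question decomposition loop for multi-hop QA.
from typing import Callable

def answer_multihop(question: str, llm: Callable[[str], str]) -> str:
    # Step 1: elicit the sub-questions needed to answer the question.
    subs = llm(
        "List the sub-questions needed to answer this question, one per line:\n"
        + question
    ).splitlines()

    # Step 2: answer each sub-question, feeding earlier answers forward.
    facts: list[str] = []
    for sub in (s.strip() for s in subs if s.strip()):
        ans = llm(f"Known facts: {'; '.join(facts)}\nQuestion: {sub}\nShort answer:")
        facts.append(f"{sub} {ans}")

    # Step 3: compose the final answer from the accumulated sub-answers.
    return llm(f"Facts: {'; '.join(facts)}\nOriginal question: {question}\nFinal answer:")
```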
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
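A hedged sketch of the end-to-end QAG setup the entry above describes: one sequence-to-sequence pass maps a context to a serialized list of question-answer pairs. The `t5-base` checkpoint is a stand-in that would need fine-tuning on QAG-formatted data, and the `Q: ... A: ... | ...` serialization is an assumed format, not the paper's.

```python
# End-to-end QAG sketch: a single seq2seq generation produces all QA pairs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # stand-in checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

context = "The Eiffel Tower, completed in 1889, is located in Paris."
inputs = tokenizer("generate question-answer pairs: " + context,
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
serialized = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Split the assumed "Q: ... A: ... | Q: ... A: ..." serialization back
# into (question, answer) pairs.
pairs = [p.split(" A: ") for p in serialized.split(" | ") if " A: " in p]
print(pairs)
```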
- Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, over tables and passages from Wikipedia.
arXiv Detail & Related papers (2022-10-22T03:21:32Z)
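A speculative sketch of the retrieve-then-reason pattern the entry above contrasts with single-shot retrieval: an intermediary alternates retrieval and reasoning until the evidence suffices. `retrieve` and `llm` are hypothetical components (a mixed table/passage index and a text generator), and the stop protocol is an assumption.

```python
# Hypothetical chain-of-reasoning loop over heterogeneous retrieved evidence.
from typing import Callable

def chain_of_reasoning_qa(
    question: str,
    retrieve: Callable[[str], list[str]],  # returns passages and linearized tables
    llm: Callable[[str], str],
    max_hops: int = 3,
) -> str:
    evidence: list[str] = []
    query = question
    for _ in range(max_hops):
        evidence.extend(retrieve(query))
        step = llm(
            f"Question: {question}\nEvidence: {' '.join(evidence)}\n"
            "If the evidence suffices, reply 'ANSWER: <answer>'; "
            "otherwise reply 'NEXT: <follow-up query>'."
        )
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        query = step.removeprefix("NEXT:").strip()
    return llm(f"Question: {question}\nEvidence: {' '.join(evidence)}\nBest-guess answer:")
```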
- MuGER$^2$: Multi-Granularity Evidence Retrieval and Reasoning for Hybrid Question Answering [32.850210766905505]
Hybrid question answering (HQA) aims to answer questions over heterogeneous data, including tables and passages linked to table cells.
We propose MuGER$^2$, a Multi-Granularity Evidence Retrieval and Reasoning approach.
Experimental results on the HybridQA dataset show that MuGER$^2$ significantly boosts HQA performance.
arXiv Detail & Related papers (2022-10-19T07:36:03Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- Read before Generate! Faithful Long Form Question Answering with Machine Reading [77.17898499652306]
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question.
We propose a new end-to-end framework that jointly models answer generation and machine reading.
arXiv Detail & Related papers (2022-03-01T10:41:17Z)
- Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation [22.20733260041759]
We argue that the evidence for an answer is critical to enhancing the interpretability of QA models.
We are the first to explicitly define the concept of evidence as the supporting facts in a context that are informative, concise, and readable.
We propose the Grow-and-Clip Evidence Distillation (GCED) algorithm to extract evidence from contexts by trading off informativeness, conciseness, and readability, as sketched below.
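A hedged sketch of a grow-and-clip style extractor under assumed scoring functions: starting from a seed sentence, a window grows while a naive informativeness/conciseness trade-off improves. The scores are crude stand-ins for the paper's criteria, and the clipping step is not shown.

```python
# Greedy "grow" phase of a grow-and-clip style evidence extractor.
def extract_evidence(sentences: list[str], question: str, answer: str) -> list[str]:
    def informativeness(window: list[str]) -> float:
        # Fraction of question/answer keywords covered by the window.
        keys = set(question.lower().split()) | set(answer.lower().split())
        words = set(" ".join(window).lower().split())
        return len(keys & words) / max(len(keys), 1)

    def conciseness(window: list[str]) -> float:
        return 1.0 / (1 + sum(len(s.split()) for s in window))  # shorter is better

    def score(window: list[str]) -> float:
        return informativeness(window) + 0.5 * conciseness(window)

    # Seed: the sentence containing the answer, else the first sentence.
    seed = next((i for i, s in enumerate(sentences) if answer.lower() in s.lower()), 0)
    lo = hi = seed
    best = score(sentences[lo:hi + 1])
    # Grow left or right while the trade-off score improves.
    while True:
        grown = [(max(lo - 1, 0), hi), (lo, min(hi + 1, len(sentences) - 1))]
        options = [(score(sentences[a:b + 1]), a, b)
                   for a, b in grown if (a, b) != (lo, hi)]
        if not options:
            break
        g, a, b = max(options)
        if g <= best:
            break
        best, lo, hi = g, a, b
    return sentences[lo:hi + 1]
```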
arXiv Detail & Related papers (2022-01-13T17:18:17Z)
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data over RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
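A minimal sketch of the refinement loop the entry above describes, under the assumption that a trained QA model's predicted spans replace noisy harvested answers between training rounds; `qa_model` and the word-overlap filter are illustrative, not the paper's exact procedure.

```python
# Hypothetical iterative refinement of harvested QA pairs.
from typing import Callable

def refine(
    data: list[dict],                     # {"context", "question", "answer"}
    qa_model: Callable[[str, str], str],  # (context, question) -> predicted span
    rounds: int = 2,
) -> list[dict]:
    for _ in range(rounds):
        refined = []
        for ex in data:
            pred = qa_model(ex["context"], ex["question"])
            # Adopt the model's span when it overlaps the noisy harvested
            # answer; drop the pair otherwise (a simple stand-in filter).
            if set(pred.split()) & set(ex["answer"].split()):
                refined.append({**ex, "answer": pred})
        data = refined
        # ... retrain qa_model on `data` here before the next round ...
    return data
```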