XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making
- URL: http://arxiv.org/abs/2311.08614v1
- Date: Wed, 15 Nov 2023 00:34:28 GMT
- Title: XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making
- Authors: Zichen Chen, Jianda Chen, Mitali Gaidhani, Ambuj Singh, Misha Sra
- Abstract summary: Large Language Models (LLMs) have recently made impressive strides in natural language understanding tasks.
In this paper, we bring transparency to this process by introducing a new explanation dataset.
Our dataset includes 12,102 question-answer-explanation (QAE) triples.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have recently made impressive strides in natural
language understanding tasks. Despite their remarkable performance,
understanding their decision-making process remains a significant challenge.
In this paper, we bring transparency to this process by introducing a
new explanation dataset for question answering (QA) tasks that integrates
knowledge graphs (KGs) in a novel way. Our dataset includes 12,102
question-answer-explanation (QAE) triples. Each explanation in the dataset
links the LLM's reasoning to entities and relations in the KGs. The explanation
component includes a why-choose explanation, a why-not-choose explanation, and
a set of reason-elements that underlie the LLM's decision. We leverage KGs and
graph attention networks (GAT) to find the reason-elements and transform them
into why-choose and why-not-choose explanations that are comprehensible to
humans. Through quantitative and qualitative evaluations, we demonstrate the
potential of our dataset to improve the in-context learning of LLMs, and
enhance their interpretability and explainability. Our work contributes to the
field of explainable AI by enabling a deeper understanding of LLMs'
decision-making processes, making them more transparent, and thereby
potentially more reliable, to researchers and practitioners alike. Our dataset is available
at: https://github.com/chen-zichen/XplainLLM_dataset.git
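To make the dataset structure and pipeline above concrete, the following is a minimal Python sketch of a QAE record and of using graph attention to surface candidate reason-elements from a KG subgraph. The field names, the `GATConv` usage, and the top-k attention heuristic are illustrative assumptions rather than the dataset's actual schema or the paper's exact pipeline; see the linked repository for the real format.

```python
# Hypothetical sketch: a QAE record plus attention-based selection of
# reason-elements from a KG subgraph. Field names and the top-k heuristic
# are assumptions for illustration, not the dataset's actual schema.
from dataclasses import dataclass, field
from typing import List

import torch
from torch_geometric.nn import GATConv


@dataclass
class QAETriple:
    question: str                   # the QA question
    answer: str                     # the answer the LLM chose
    why_choose: str                 # human-readable reason for the choice
    why_not_choose: str             # why the alternatives were rejected
    reason_elements: List[str] = field(default_factory=list)  # KG entities/relations behind the decision


class ReasonElementScorer(torch.nn.Module):
    """Scores KG nodes by GAT attention; the top-scoring nodes become
    candidate reason-elements (a simplified, assumed pipeline)."""

    def __init__(self, in_dim: int, hidden_dim: int, heads: int = 4):
        super().__init__()
        self.gat = GATConv(in_dim, hidden_dim, heads=heads)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor, k: int = 5) -> torch.Tensor:
        # return_attention_weights=True also yields per-edge attention (alpha).
        _, (edge_idx, alpha) = self.gat(x, edge_index, return_attention_weights=True)
        # Aggregate head-averaged edge attention onto target nodes, then take top-k.
        node_scores = torch.zeros(x.size(0), device=x.device)
        node_scores.index_add_(0, edge_idx[1], alpha.mean(dim=1))
        return node_scores.topk(min(k, x.size(0))).indices
```

In a full pipeline, the selected node indices would be mapped back to KG entity and relation names and then verbalized, e.g., by prompting an LLM with the retrieved elements, into the why-choose and why-not-choose explanations.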
Related papers
- Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path [53.71787069694794]
We focus on the graph reasoning ability of Large Language Models (LLMs).
We revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem.
Our findings suggest that LLMs can fail to understand graph structures from text descriptions and exhibit varying performance across all three fundamental tasks.
arXiv Detail & Related papers (2024-08-18T16:26:39Z)
- Reasoning with Large Language Models, a Survey [2.831296564800826]
This paper reviews the rapidly expanding field of prompt-based reasoning with LLMs.
Our taxonomy identifies different ways to generate, evaluate, and control multi-step reasoning.
We find that self-improvement, self-reflection, and some meta abilities of the reasoning processes are possible through the judicious use of prompts.
arXiv Detail & Related papers (2024-07-16T08:49:35Z)
- Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs [52.42505579545893]
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought explanations alongside answers.
We propose a novel discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge of reasoning and the accuracy of the generated CoT.
arXiv Detail & Related papers (2024-02-17T05:22:56Z)
- FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decisions of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z)
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns how to explain past stock movements through self-reasoning, while a Proximal Policy Optimization (PPO) trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
- Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning [14.219239732584368]
In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering.
Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves QA performance on two benchmark datasets.
arXiv Detail & Related papers (2023-11-07T05:32:39Z)
- ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination [26.878606171228448]
Existing explanation datasets mostly consist of English-language general-knowledge questions.
To address this language bias and the lack of medical resources in rationale-annotated QA datasets, we present ExplainCPE.
arXiv Detail & Related papers (2023-05-22T11:45:42Z)
- Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks [121.74957524305283]
This paper proposes a novel framework named Search-in-the-Chain (SearChain) for the interaction between Information Retrieval (IR) and Large Language Models (LLMs).
Experiments show that SearChain outperforms state-of-the-art baselines on complex knowledge-intensive tasks.
arXiv Detail & Related papers (2023-04-28T10:15:25Z)
- LMExplainer: Grounding Knowledge and Explaining Language Models [37.578973458651944]
Language models (LMs) like GPT-4 are important in AI applications, but their opaque decision-making process reduces user trust, especially in safety-critical areas.
We introduce LMExplainer, a novel knowledge-grounded explainer that clarifies the reasoning process of LMs through intuitive, human-understandable explanations.
arXiv Detail & Related papers (2023-03-29T08:59:44Z)
- Empowering Language Models with Knowledge Graph Reasoning for Question Answering [117.79170629640525]
We propose the knOwledge REasOning empowered Language Model (OREO-LM).
OREO-LM consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs; a hedged sketch of such a pluggable layer appears after this list.
We show significant performance gains, achieving state-of-the-art results in the Closed-Book setting.
arXiv Detail & Related papers (2022-11-15T18:26:26Z)
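As a companion to the OREO-LM entry, here is a minimal PyTorch sketch of what a pluggable knowledge-interaction layer might look like. The cross-attention fusion, the residual design, and all names here are assumptions for illustration; OREO-LM's actual Knowledge Interaction Layer differs in its details.

```python
# Hypothetical sketch of a knowledge-interaction layer that can be inserted
# between Transformer blocks. The cross-attention design is an illustrative
# guess, not OREO-LM's published architecture.
import torch
import torch.nn as nn


class KnowledgeInteractionLayer(nn.Module):
    def __init__(self, hidden_dim: int, num_heads: int = 8):
        super().__init__()
        # Cross-attention from token states (queries) to KG entity embeddings.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, hidden_states: torch.Tensor, kg_embeddings: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_dim] from the host LM block
        # kg_embeddings: [batch, num_entities, hidden_dim] retrieved KG entities
        fused, _ = self.cross_attn(hidden_states, kg_embeddings, kg_embeddings)
        # The residual connection is what makes the layer a drop-in insert:
        # token states pass through with an additive knowledge update.
        return self.norm(hidden_states + fused)
```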