ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist
Examination
- URL: http://arxiv.org/abs/2305.12945v2
- Date: Thu, 26 Oct 2023 03:15:41 GMT
- Title: ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist
Examination
- Authors: Dongfang Li, Jindi Yu, Baotian Hu, Zhenran Xu and Min Zhang
- Abstract summary: Existing explanation datasets are mostly English-language general knowledge questions.
To address the language bias and the lack of medical resources among rationale-annotated QA datasets, we present ExplainCPE.
- Score: 26.878606171228448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As ChatGPT and GPT-4 spearhead the development of Large Language
Models (LLMs), more researchers are investigating their performance across
various tasks. However, more research is needed on the interpretability of
LLMs, that is, their ability to generate reasons for the answers they give.
Existing explanation datasets are mostly English-language general-knowledge
questions, which leads to insufficient thematic and linguistic diversity. To
address the language bias and the lack of medical resources among
rationale-annotated QA datasets, we present ExplainCPE (over 7k instances), a
challenging medical benchmark in Simplified Chinese. We analyzed the errors of
ChatGPT and GPT-4, pointing out the limitations of current LLMs in text
understanding and computational reasoning. During the experiments, we also
found that different LLMs have different preferences for in-context learning.
ExplainCPE presents a significant challenge, but its potential for further
investigation is promising, and it can be used to evaluate a model's ability
to generate explanations. AI safety and trustworthiness need more attention,
and this work takes a first step toward exploring the medical
interpretability of LLMs. The dataset is available at
https://github.com/HITsz-TMG/ExplainCPE.
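
To make the evaluation setting concrete, here is a minimal sketch of how a single ExplainCPE-style instance might be scored: the model is prompted for the correct option plus a free-text explanation, and the explanation is compared against the gold rationale. The field names, the example instance, and the character-overlap metric are assumptions for illustration and are not taken from the released dataset or evaluation code.

```python
# Minimal sketch of the ExplainCPE evaluation setting: ask a model for an
# answer plus a free-text explanation, then compare the explanation to the
# gold rationale. Field names and the overlap metric are illustrative
# assumptions, not the official data format or evaluation script.
from collections import Counter

instance = {  # hypothetical instance layout
    "question": "下列药物中，属于质子泵抑制剂的是？",
    "options": {"A": "奥美拉唑", "B": "阿司匹林", "C": "氨氯地平", "D": "头孢呋辛"},
    "answer": "A",
    "explanation": "奥美拉唑不可逆地抑制胃壁细胞的H+/K+-ATP酶（质子泵），从而减少胃酸分泌。",
}

def build_prompt(inst: dict) -> str:
    """Ask for the correct option followed by a free-text explanation."""
    opts = "\n".join(f"{k}. {v}" for k, v in inst["options"].items())
    return f"{inst['question']}\n{opts}\n请给出正确选项，并解释原因。"

def char_f1(pred: str, gold: str) -> float:
    """Character-level F1: a crude stand-in for lexical-overlap metrics."""
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# `generated` would come from an LLM called with build_prompt(instance).
generated = "奥美拉唑是质子泵抑制剂，通过抑制H+/K+-ATP酶减少胃酸分泌。"
print(build_prompt(instance))
print(f"explanation F1 vs. gold: {char_f1(generated, instance['explanation']):.2f}")
```

In practice one would replace `char_f1` with whichever reference-based or human evaluation metrics the paper actually reports.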
Related papers
- MedExQA: Medical Question Answering Benchmark with Multiple Explanations [2.2246416434538308]
This paper introduces MedExQA, a novel benchmark in medical question-answering to evaluate large language models' (LLMs) understanding of medical knowledge through explanations.
By constructing datasets across five distinct medical specialties, we address a major gap in current medical QA benchmarks.
Our work highlights the importance of explainability in medical LLMs, proposes an effective methodology for evaluating models beyond classification accuracy, and sheds light on one specific domain, speech language pathology.
arXiv Detail & Related papers (2024-06-10T14:47:04Z)
- MedREQAL: Examining Medical Knowledge Recall of Large Language Models via Question Answering [5.065947993017158]
Large Language Models (LLMs) have demonstrated an impressive ability to encode knowledge during pre-training on large text corpora.
We examine the capability of LLMs to exhibit medical knowledge recall by constructing a novel dataset derived from systematic reviews.
arXiv Detail & Related papers (2024-06-09T16:33:28Z)
- Crafting Interpretable Embeddings by Asking LLMs Questions [89.49960984640363]
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM (a minimal illustrative sketch of this idea follows the list below).
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
arXiv Detail & Related papers (2024-05-26T22:30:29Z)
- Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs [52.42505579545893]
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought explanations alongside answers.
We propose a novel discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge of reasoning and the accuracy of the generated CoT.
arXiv Detail & Related papers (2024-02-17T05:22:56Z)
- When LLMs Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models [59.84769254832941]
We propose a FaLlacy Understanding Benchmark (FLUB) containing cunning texts that are easy for humans to understand but difficult for models to grasp.
Specifically, the cunning texts that FLUB focuses on mainly consist of the tricky, humorous, and misleading texts collected from the real internet environment.
Based on FLUB, we investigate the performance of multiple representative and advanced LLMs.
arXiv Detail & Related papers (2024-02-16T22:12:53Z)
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be given to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z)
- XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making [13.928951741632815]
Large Language Models (LLMs) have recently made impressive strides in natural language understanding tasks.
In this paper, we look into bringing some transparency to this process by introducing a new explanation dataset.
Our dataset includes 12,102 question-answer-explanation (QAE) triples.
arXiv Detail & Related papers (2023-11-15T00:34:28Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding [12.128991867050487]
Large language models (LLMs) have made significant progress in various domains, including healthcare.
In this study, we evaluate state-of-the-art LLMs within the realm of clinical language understanding tasks.
arXiv Detail & Related papers (2023-04-09T16:31:47Z)
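
As noted in the QA-Emb entry above, the core idea (one embedding dimension per yes/no question answered by an LLM) can be shown in a short sketch. The probing questions and the keyword-matching `answer_yes_no` stand-in for a real LLM call are purely illustrative assumptions, not the authors' implementation.

```python
# QA-Emb-style sketch: represent a text as a vector of yes/no answers, so each
# embedding dimension is directly interpretable as the question it answers.
# `answer_yes_no` is a toy keyword matcher standing in for an actual LLM call.
import numpy as np

QUESTIONS = [  # hypothetical probing questions with stand-in keywords
    ("Does the text mention a drug or medication?", ("drug", "medication", "tablet")),
    ("Does the text describe a dosage or quantity?", ("mg", "dose", "twice daily")),
    ("Is the text about an adverse reaction?", ("rash", "nausea", "adverse")),
]

def answer_yes_no(keywords: tuple, text: str) -> bool:
    """Toy stand-in: a real system would prompt an LLM with (question, text)."""
    return any(k in text.lower() for k in keywords)

def qa_embed(text: str) -> np.ndarray:
    """One feature per question; 1.0 means the (simulated) answer is 'yes'."""
    return np.array([float(answer_yes_no(kw, text)) for _, kw in QUESTIONS])

print(qa_embed("Take one 500 mg tablet twice daily; stop if a rash appears."))
# -> [1. 1. 1.]
```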
This list is automatically generated from the titles and abstracts of the papers on this site.