Statistical Knowledge Assessment for Large Language Models
- URL: http://arxiv.org/abs/2305.10519v2
- Date: Sat, 28 Oct 2023 07:58:04 GMT
- Title: Statistical Knowledge Assessment for Large Language Models
- Authors: Qingxiu Dong, Jingjing Xu, Lingpeng Kong, Zhifang Sui and Lei Li
- Abstract summary: Given varying prompts regarding a factoid question, can a large language model (LLM) reliably generate factually correct answers?
We propose KaRR, a statistical approach to assess factual knowledge for LLMs.
Our results reveal that the knowledge in LLMs with the same backbone architecture adheres to the scaling law, while tuning on instruction-following data sometimes compromises the model's capability to generate factually correct text reliably.
- Score: 79.07989821512128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given varying prompts regarding a factoid question, can a large language
model (LLM) reliably generate factually correct answers? Existing LLMs may
generate distinct responses for different prompts. In this paper, we study the
problem of quantifying knowledge contained in an LLM regarding a given set of
facts. We propose KaRR, a statistical approach to assess factual knowledge for
LLMs. The main idea is to estimate the ratio between the probability that the LLM
generates text corresponding to the answer entity, given diverse prompts expressing the
subject and the queried relation, and the probability of generating that text by random chance. Our assessment suite
contains a comprehensive set of 994,123 entities and 600 relations, with
1,395,905 text aliases. We use our method to evaluate 20 LLMs of various sizes,
including LLaMA, Alpaca, OPT, etc. Experiments show that our results have a
strong correlation (0.43 Kendall's $\tau$) with the results of human assessment
on LLMs. Our results reveal that the knowledge in LLMs with the same backbone
architecture adheres to the scaling law, while tuning on instruction-following
data sometimes compromises the model's capability to generate factually correct
text reliably.
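For intuition, below is a minimal sketch of how such a prompt-versus-chance ratio could be estimated from token log-probabilities. The prompt templates, the `continuation_logprob` helper, the content-free baseline prompt, and the use of GPT-2 are illustrative assumptions, not the paper's exact KaRR estimator.

```python
# Sketch of a KaRR-style ratio with a Hugging Face causal LM: how much more likely
# is the answer text under knowledge-eliciting prompts than under a content-free
# baseline prompt (a proxy for "random chance")? Illustrative, not the paper's estimator.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the LLM under assessment
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` following `prompt`.
    Assumes the prompt tokens tokenize identically with and without the continuation."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    for pos in range(prompt_len, full_ids.shape[1]):
        # the token at `pos` is predicted from the logits at position `pos - 1`
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

def karr_style_ratio(prompts, answer_aliases, baseline_prompt="The answer is"):
    """Geometric-mean likelihood of the answer under diverse fact prompts,
    divided by its likelihood under a content-free baseline prompt."""
    best = lambda p: max(continuation_logprob(p, " " + a) for a in answer_aliases)
    avg_prompted = sum(best(p) for p in prompts) / len(prompts)
    return math.exp(avg_prompted - best(baseline_prompt))

# Example: the fact (France, capital, Paris) verbalized with several prompt templates.
prompts = [
    "The capital of France is",
    "France's capital city is",
    "Q: What is the capital of France? A:",
]
print(karr_style_ratio(prompts, ["Paris"]))
```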
Related papers
- WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia [59.96425443250666]
Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs).
In this work, we conduct a comprehensive evaluation of LLM-generated answers to questions based on contradictory passages from Wikipedia.
We benchmark a diverse range of both closed and open-source LLMs under different QA scenarios, including RAG with a single passage, and RAG with 2 contradictory passages.
arXiv Detail & Related papers (2024-06-19T20:13:42Z)
- See the Unseen: Better Context-Consistent Knowledge-Editing by Noises [73.54237379082795]
Knowledge-editing updates the knowledge of large language models (LLMs).
Existing works ignore how context affects knowledge recall, so their edits lack generalization.
We empirically find that the effects of different contexts upon LLMs in recalling the same knowledge follow a Gaussian-like distribution.
arXiv Detail & Related papers (2024-01-15T09:09:14Z)
- Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves [57.974103113675795]
We present a method named 'Rephrase and Respond' (RaR), which allows Large Language Models to rephrase and expand questions posed by humans before responding.
RaR serves as a simple yet effective prompting method for improving performance.
We show that RaR is complementary to the popular Chain-of-Thought (CoT) methods, both theoretically and empirically.
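As an illustration of the idea, here is a minimal one-pass sketch; the instruction wording and the `chat` callable are assumptions, not the paper's released prompts.

```python
# Illustrative one-pass Rephrase-and-Respond style prompt.
# `chat` is a placeholder for any chat-completion call; the wording is an assumption.
def rephrase_and_respond(question: str, chat) -> str:
    prompt = (
        f"{question}\n"
        "Rephrase and expand the question to remove any ambiguity, "
        "and then answer the rephrased question."
    )
    return chat(prompt)

# Example usage with a trivial stand-in "model":
if __name__ == "__main__":
    echo_model = lambda p: f"[model response to]\n{p}"
    print(rephrase_and_respond("Was Abraham Lincoln born in an even month?", echo_model))
```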
arXiv Detail & Related papers (2023-11-07T18:43:34Z)
- Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method [36.24876571343749]
Large Language Models (LLMs) have shown great potential in Natural Language Processing (NLP) tasks.
Recent literature reveals that LLMs generate nonfactual responses intermittently.
We propose a novel self-detection method to identify which questions an LLM does not know, i.e., questions that are prone to eliciting nonfactual results.
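The summary does not spell out the procedure; below is one plausible consistency-based self-detection sketch (paraphrase the question several times, collect answers, and flag low agreement), which may well differ from the paper's actual method.

```python
# Hypothetical consistency-based self-detection: if answers to paraphrases of the
# same question disagree, treat the question as one the LLM likely does not know.
# Assumed approach inspired by the summary, not the paper's exact algorithm.
from collections import Counter

def self_detect_unknown(question: str, paraphrase, answer, n: int = 5,
                        agreement_threshold: float = 0.6) -> bool:
    """Return True if the model's answers across paraphrases are too inconsistent."""
    questions = [question] + [paraphrase(question) for _ in range(n - 1)]
    answers = [answer(q).strip().lower() for q in questions]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers) < agreement_threshold

# Example with toy stand-ins for the paraphraser and the answering model:
if __name__ == "__main__":
    paraphrase = lambda q: q          # placeholder paraphraser
    answer = lambda q: "paris"        # placeholder LLM answer
    print(self_detect_unknown("What is the capital of France?", paraphrase, answer))
```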
arXiv Detail & Related papers (2023-10-27T06:22:14Z)
- Assessing the Reliability of Large Language Model Knowledge [78.38870272050106]
Large language models (LLMs) have been treated as knowledge bases due to their strong performance in knowledge probing tasks.
How do we evaluate the capabilities of LLMs to consistently produce factually correct answers?
We propose MOdel kNowledge relIabiliTy scORe (MONITOR), a novel metric designed to directly measure LLMs' factual reliability.
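The summary does not define the metric; as a purely illustrative stand-in for a factual-reliability score (not MONITOR's actual formulation), one could measure how often a model returns the known correct answer across prompt variants.

```python
# Illustrative factual-reliability proxy (NOT the MONITOR metric itself):
# the fraction of prompt variants whose answer contains the gold answer.
def reliability_proxy(prompt_variants, gold_answer: str, answer) -> float:
    hits = sum(1 for p in prompt_variants if gold_answer.lower() in answer(p).lower())
    return hits / len(prompt_variants)

# Example with a toy stand-in model:
if __name__ == "__main__":
    model = lambda p: "The capital of France is Paris."
    variants = ["What is the capital of France?",
                "France's capital city is named what?",
                "Name the capital of France."]
    print(reliability_proxy(variants, "Paris", model))
```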
arXiv Detail & Related papers (2023-10-15T12:40:30Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- Head-to-Tail: How Knowledgeable are Large Language Models (LLMs)? A.K.A. Will LLMs Replace Knowledge Graphs? [24.931467926497152]
Head-to-Tail is a benchmark that consists of 18K question-answer pairs regarding head, torso, and tail facts in terms of popularity.
We show that existing LLMs are still far from being perfect in terms of their grasp of factual knowledge, especially for facts of torso-to-tail entities.
arXiv Detail & Related papers (2023-08-20T05:31:03Z)
- Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering [7.888547093390469]
Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks.
We propose to augment the knowledge directly in the input of LLMs.
Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training and is thus completely zero-shot.
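The summary describes placing retrieved knowledge directly in the input with no training; a minimal sketch of that idea follows. The toy triple retriever and prompt wording are assumptions for illustration, not KAPING's released implementation.

```python
# Minimal sketch of knowledge-augmented prompting in the spirit of KAPING:
# retrieve facts relevant to the question and place them directly in the prompt,
# with no model training. Retrieval and prompt wording here are illustrative.
def verbalize(triple) -> str:
    subject, relation, obj = triple
    return f"({subject}, {relation}, {obj})"

def retrieve_triples(question: str, knowledge_graph, k: int = 3):
    """Toy retriever: keep triples whose subject or object appears in the question."""
    q = question.lower()
    hits = [t for t in knowledge_graph if t[0].lower() in q or t[2].lower() in q]
    return hits[:k]

def knowledge_augmented_prompt(question: str, knowledge_graph) -> str:
    facts = retrieve_triples(question, knowledge_graph)
    fact_block = "\n".join(verbalize(t) for t in facts)
    return ("Below are facts that may be relevant to the question.\n"
            f"{fact_block}\n"
            f"Question: {question}\nAnswer:")

# Example usage with a tiny toy knowledge graph:
if __name__ == "__main__":
    kg = [("Paris", "capital of", "France"), ("Berlin", "capital of", "Germany")]
    print(knowledge_augmented_prompt("What country is Paris the capital of?", kg))
```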
arXiv Detail & Related papers (2023-06-07T04:15:21Z)
- LLMMaps -- A Visual Metaphor for Stratified Evaluation of Large Language Models [13.659853119356507]
Large Language Models (LLMs) have revolutionized natural language processing and demonstrated impressive capabilities in various tasks.
They are prone to hallucinations, where the model produces incorrect or false information in its responses.
We propose LLMMaps as a novel visualization technique that enables users to evaluate LLMs' performance with respect to Q&A datasets.
arXiv Detail & Related papers (2023-04-02T05:47:09Z)