TruthEval: A Dataset to Evaluate LLM Truthfulness and Reliability
- URL: http://arxiv.org/abs/2406.01855v1
- Date: Tue, 4 Jun 2024 00:01:35 GMT
- Title: TruthEval: A Dataset to Evaluate LLM Truthfulness and Reliability
- Authors: Aisha Khatun, Daniel G. Brown
- Abstract summary: We present TruthEval, a curated collection of challenging statements on sensitive topics for benchmarking LLMs.
These statements were curated by hand and have known truth values.
Initial analyses using this dataset reveal several instances of LLMs failing at simple tasks, showing their inability to understand simple questions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Model (LLM) evaluation is currently one of the most important areas of research, with existing benchmarks proving to be insufficient and not completely representative of LLMs' various capabilities. We present a curated collection of challenging statements on sensitive topics for LLM benchmarking called TruthEval. These statements were curated by hand and contain known truth values. The categories were chosen to distinguish LLMs' abilities from their stochastic nature. We perform initial analyses using this dataset and find several instances of LLMs failing at simple tasks, showing their inability to understand simple questions.
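The abstract describes a benchmark of statements with known truth values against which model answers can be scored. The sketch below illustrates one plausible evaluation loop under that description; the example statements, labels, and the ask_model() stub are assumptions for illustration, not the actual TruthEval data or the authors' evaluation code.

```python
# Minimal sketch of a truthfulness evaluation loop in the spirit of TruthEval.
# The statements, labels, and ask_model() stub are illustrative assumptions,
# not the real dataset contents or the paper's evaluation pipeline.

def ask_model(statement: str) -> bool:
    """Placeholder for an LLM query: 'Is this statement true?'.
    Replace with a real model call in practice."""
    return True  # stub: always answers "true"

# Hypothetical items mimicking a (statement, known truth value) format.
dataset = [
    {"statement": "The Earth is flat.", "label": False},
    {"statement": "Water boils at 100 degrees Celsius at sea level.", "label": True},
]

correct = sum(ask_model(item["statement"]) == item["label"] for item in dataset)
accuracy = correct / len(dataset)
print(f"Accuracy: {accuracy:.2%}")
```

Scoring a model this way makes failures on simple, known-truth statements directly measurable, which is the kind of analysis the abstract reports.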